00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1826
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3087
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.136 The recommended git tool is: git
00:00:00.136 using credential 00000000-0000-0000-0000-000000000002
00:00:00.137 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.160 Fetching changes from the remote Git repository
00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.187 Using shallow fetch with depth 1
00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.187 > git --version # timeout=10
00:00:00.203 > git --version # 'git version 2.39.2'
00:00:00.203 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.204 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.390 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.401 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.412 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:04.412 > git config core.sparsecheckout # timeout=10
00:00:04.423 > git read-tree -mu HEAD # timeout=10
00:00:04.439 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:04.458 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:04.458 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:04.575 [Pipeline] Start of Pipeline
00:00:04.589 [Pipeline] library
00:00:04.591 Loading library shm_lib@master
00:00:04.591 Library shm_lib@master is cached. Copying from home.
00:00:04.606 [Pipeline] node
00:00:04.617 Running on VM-host-SM4 in /var/jenkins/workspace/centos7-vg-autotest
00:00:04.618 [Pipeline] {
00:00:04.630 [Pipeline] catchError
00:00:04.631 [Pipeline] {
00:00:04.645 [Pipeline] wrap
00:00:04.656 [Pipeline] {
00:00:04.662 [Pipeline] stage
00:00:04.664 [Pipeline] { (Prologue)
00:00:04.677 [Pipeline] echo
00:00:04.678 Node: VM-host-SM4
00:00:04.681 [Pipeline] cleanWs
00:00:04.688 [WS-CLEANUP] Deleting project workspace...
00:00:04.688 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.694 [WS-CLEANUP] done
00:00:04.834 [Pipeline] setCustomBuildProperty
00:00:04.890 [Pipeline] nodesByLabel
00:00:04.891 Found a total of 1 nodes with the 'sorcerer' label
00:00:04.898 [Pipeline] httpRequest
00:00:04.902 HttpMethod: GET
00:00:04.902 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:04.903 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:04.904 Response Code: HTTP/1.1 200 OK
00:00:04.905 Success: Status code 200 is in the accepted range: 200,404
00:00:04.905 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:05.546 [Pipeline] sh
00:00:05.828 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:05.845 [Pipeline] httpRequest
00:00:05.850 HttpMethod: GET
00:00:05.850 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:05.851 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:05.865 Response Code: HTTP/1.1 200 OK
00:00:05.866 Success: Status code 200 is in the accepted range: 200,404
00:00:05.866 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:18.233 [Pipeline] sh
00:01:18.516 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:21.065 [Pipeline] sh
00:01:21.352 + git -C spdk log --oneline -n5
00:01:21.352 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:01:21.352 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function
00:01:21.352 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:01:21.352 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:01:21.352 3b33f4333 test/nvme/cuse: Fix typo
00:01:21.374 [Pipeline] writeFile
00:01:21.393 [Pipeline] sh
00:01:21.677 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:21.688 [Pipeline] sh
00:01:21.972 + cat autorun-spdk.conf
00:01:21.972 SPDK_TEST_UNITTEST=1
00:01:21.972 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.972 SPDK_TEST_BLOCKDEV=1
00:01:21.972 SPDK_RUN_ASAN=1
00:01:21.972 SPDK_TEST_DAOS=1
00:01:21.972 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.979 RUN_NIGHTLY=1
00:01:21.981 [Pipeline] }
00:01:22.000 [Pipeline] // stage
00:01:22.016 [Pipeline] stage
00:01:22.018 [Pipeline] { (Run VM)
00:01:22.033 [Pipeline] sh
00:01:22.319 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:22.319 + echo 'Start stage prepare_nvme.sh'
00:01:22.319 Start stage prepare_nvme.sh
00:01:22.319 + [[ -n 9 ]]
00:01:22.319 + disk_prefix=ex9
00:01:22.319 + [[ -n /var/jenkins/workspace/centos7-vg-autotest ]]
00:01:22.319 + [[ -e /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf ]]
00:01:22.319 + source /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf
00:01:22.319 ++ SPDK_TEST_UNITTEST=1
00:01:22.319 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.319 ++ SPDK_TEST_BLOCKDEV=1
00:01:22.319 ++ SPDK_RUN_ASAN=1
00:01:22.319 ++ SPDK_TEST_DAOS=1
00:01:22.320 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.320 ++ RUN_NIGHTLY=1
00:01:22.320 + cd /var/jenkins/workspace/centos7-vg-autotest
00:01:22.320 + nvme_files=()
00:01:22.320 + declare -A nvme_files
00:01:22.320 + backend_dir=/var/lib/libvirt/images/backends
00:01:22.320 + nvme_files['nvme.img']=5G
00:01:22.320 + nvme_files['nvme-cmb.img']=5G
00:01:22.320 + nvme_files['nvme-multi0.img']=4G
00:01:22.320 + nvme_files['nvme-multi1.img']=4G
00:01:22.320 + nvme_files['nvme-multi2.img']=4G
00:01:22.320 + nvme_files['nvme-openstack.img']=8G
00:01:22.320 + nvme_files['nvme-zns.img']=5G
00:01:22.320 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:22.320 + (( SPDK_TEST_FTL == 1 ))
00:01:22.320 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:22.320 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:22.320 + for nvme in "${!nvme_files[@]}"
00:01:22.320 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:01:22.320 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:22.320 + for nvme in "${!nvme_files[@]}"
00:01:22.320 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:01:22.578 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.578 + for nvme in "${!nvme_files[@]}"
00:01:22.578 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:01:22.578 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:22.578 + for nvme in "${!nvme_files[@]}"
00:01:22.578 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:01:22.838 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.838 + for nvme in "${!nvme_files[@]}"
00:01:22.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:01:23.097 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.097 + for nvme in "${!nvme_files[@]}"
00:01:23.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:01:23.097 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.097 + for nvme in "${!nvme_files[@]}"
00:01:23.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:01:23.357 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:23.357 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:01:23.357 + echo 'End stage prepare_nvme.sh'
00:01:23.357 End stage prepare_nvme.sh
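The "Formatting '...', fmt=raw size=... preallocation=falloc" lines above are qemu-img's banner, so create_nvme_img.sh evidently wraps a raw-image create. A minimal sketch of that step, assuming qemu-img is the backend (the real spdk/scripts/vagrant/create_nvme_img.sh is not shown in this log and also handles other image flavors):

    #!/usr/bin/env bash
    # Hypothetical stand-in for the -n/-s handling seen above; not the actual script.
    set -e
    while getopts "n:s:" opt; do
        case "$opt" in
            n) img=$OPTARG ;;   # target path, e.g. .../backends/ex9-nvme.img
            s) size=$OPTARG ;;  # image size, e.g. 5G
        esac
    done
    # Prints the same "Formatting '...', fmt=raw size=... preallocation=falloc" banner.
    qemu-img create -f raw -o preallocation=falloc "$img" "$size"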
00:01:23.368 [Pipeline] sh
00:01:23.650 + DISTRO=centos7 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:23.650 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f centos7
00:01:23.650
00:01:23.650 DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant
00:01:23.650 SPDK_DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk
00:01:23.650 VAGRANT_TARGET=/var/jenkins/workspace/centos7-vg-autotest
00:01:23.650 HELP=0
00:01:23.650 DRY_RUN=0
00:01:23.650 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img,
00:01:23.650 NVME_DISKS_TYPE=nvme,
00:01:23.650 NVME_AUTO_CREATE=0
00:01:23.650 NVME_DISKS_NAMESPACES=,
00:01:23.650 NVME_CMB=,
00:01:23.650 NVME_PMR=,
00:01:23.650 NVME_ZNS=,
00:01:23.650 NVME_MS=,
00:01:23.650 NVME_FDP=,
00:01:23.650 SPDK_VAGRANT_DISTRO=centos7
00:01:23.650 SPDK_VAGRANT_VMCPU=10
00:01:23.650 SPDK_VAGRANT_VMRAM=12288
00:01:23.650 SPDK_VAGRANT_PROVIDER=libvirt
00:01:23.650 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:23.650 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:23.650 SPDK_OPENSTACK_NETWORK=0
00:01:23.650 VAGRANT_PACKAGE_BOX=0
00:01:23.650 VAGRANTFILE=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:23.650 FORCE_DISTRO=true
00:01:23.650 VAGRANT_BOX_VERSION=
00:01:23.650 EXTRA_VAGRANTFILES=
00:01:23.650 NIC_MODEL=e1000
00:01:23.650
00:01:23.650 mkdir: created directory '/var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt'
00:01:23.650 /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt /var/jenkins/workspace/centos7-vg-autotest
00:01:26.188 Bringing machine 'default' up with 'libvirt' provider...
00:01:26.757 ==> default: Creating image (snapshot of base box volume).
00:01:26.757 ==> default: Creating domain with the following settings...
00:01:26.757 ==> default: -- Name: centos7-7.8.2003-1711172311-2200_default_1715747920_8cfe36360f2798baede9
00:01:26.757 ==> default: -- Domain type: kvm
00:01:26.757 ==> default: -- Cpus: 10
00:01:26.757 ==> default: -- Feature: acpi
00:01:26.757 ==> default: -- Feature: apic
00:01:26.757 ==> default: -- Feature: pae
00:01:26.757 ==> default: -- Memory: 12288M
00:01:26.757 ==> default: -- Memory Backing: hugepages:
00:01:26.757 ==> default: -- Management MAC:
00:01:26.757 ==> default: -- Loader:
00:01:26.757 ==> default: -- Nvram:
00:01:26.757 ==> default: -- Base box: spdk/centos7
00:01:26.757 ==> default: -- Storage pool: default
00:01:26.757 ==> default: -- Image: /var/lib/libvirt/images/centos7-7.8.2003-1711172311-2200_default_1715747920_8cfe36360f2798baede9.img (20G)
00:01:26.757 ==> default: -- Volume Cache: default
00:01:26.757 ==> default: -- Kernel:
00:01:26.757 ==> default: -- Initrd:
00:01:26.757 ==> default: -- Graphics Type: vnc
00:01:26.758 ==> default: -- Graphics Port: -1
00:01:26.758 ==> default: -- Graphics IP: 127.0.0.1
00:01:26.758 ==> default: -- Graphics Password: Not defined
00:01:26.758 ==> default: -- Video Type: cirrus
00:01:26.758 ==> default: -- Video VRAM: 9216
00:01:26.758 ==> default: -- Sound Type:
00:01:26.758 ==> default: -- Keymap: en-us
00:01:26.758 ==> default: -- TPM Path:
00:01:26.758 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:26.758 ==> default: -- Command line args:
00:01:26.758 ==> default: -> value=-device,
00:01:26.758 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:26.758 ==> default: -> value=-drive,
00:01:26.758 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0,
00:01:26.758 ==> default: -> value=-device,
00:01:26.758 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.017 ==> default: Creating shared folders metadata...
00:01:27.017 ==> default: Starting domain.
00:01:28.921 ==> default: Waiting for domain to get an IP address...
00:01:41.195 ==> default: Waiting for SSH to become available...
00:01:42.573 ==> default: Configuring and enabling network interfaces...
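The "Command line args" block above is vagrant-libvirt passing raw QEMU arguments through: one nvme controller (serial 12340) plus an nvme-ns namespace backed by ex9-nvme.img with 4096-byte sectors. A standalone sketch of the same topology (the -machine/-smp/-m values below are assumptions for illustration, not taken from this log):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -machine q35,accel=kvm -smp 10 -m 12288 \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest this surfaces as nvme0/nvme0n1, which is what setup.sh status reports later in this log.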
00:01:46.764 default: SSH address: 192.168.121.129:22
00:01:46.764 default: SSH username: vagrant
00:01:46.764 default: SSH auth method: private key
00:01:47.333 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:55.470 ==> default: Mounting SSHFS shared folder...
00:01:56.847 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output => /home/vagrant/spdk_repo/output
00:01:56.847 ==> default: Checking Mount..
00:01:57.105 ==> default: Folder Successfully Mounted!
00:01:57.105 ==> default: Running provisioner: file...
00:01:57.673 default: ~/.gitconfig => .gitconfig
00:01:57.932
00:01:57.932 SUCCESS!
00:01:57.932
00:01:57.932 cd to /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt and type "vagrant ssh" to use.
00:01:57.932 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:57.932 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt" to destroy all trace of vm.
00:01:57.932
00:01:57.941 [Pipeline] }
00:01:57.958 [Pipeline] // stage
00:01:57.966 [Pipeline] dir
00:01:57.967 Running in /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt
00:01:57.968 [Pipeline] {
00:01:57.981 [Pipeline] catchError
00:01:57.982 [Pipeline] {
00:01:57.996 [Pipeline] sh
00:01:58.276 + vagrant ssh-config --host vagrant
00:01:58.276 + sed -ne /^Host/,$p
00:01:58.276 + tee ssh_conf
00:02:01.560 Host vagrant
00:02:01.560 HostName 192.168.121.129
00:02:01.560 User vagrant
00:02:01.560 Port 22
00:02:01.560 UserKnownHostsFile /dev/null
00:02:01.560 StrictHostKeyChecking no
00:02:01.560 PasswordAuthentication no
00:02:01.560 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-centos7/7.8.2003-1711172311-2200/libvirt/centos7
00:02:01.560 IdentitiesOnly yes
00:02:01.560 LogLevel FATAL
00:02:01.560 ForwardAgent yes
00:02:01.560 ForwardX11 yes
00:02:01.560
00:02:01.576 [Pipeline] withEnv
00:02:01.579 [Pipeline] {
00:02:01.596 [Pipeline] sh
00:02:01.875 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:01.875 source /etc/os-release
00:02:01.875 [[ -e /image.version ]] && img=$(< /image.version)
00:02:01.875 # Minimal, systemd-like check.
00:02:01.875 if [[ -e /.dockerenv ]]; then
00:02:01.875 # Clear garbage from the node's name:
00:02:01.875 # agt-er_autotest_547-896 -> autotest_547-896
00:02:01.875 # $HOSTNAME is the actual container id
00:02:01.875 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:01.875 if mountpoint -q /etc/hostname; then
00:02:01.875 # We can assume this is a mount from a host where container is running,
00:02:01.875 # so fetch its hostname to easily identify the target swarm worker.
00:02:01.875 container="$(< /etc/hostname) ($agent)"
00:02:01.875 else
00:02:01.875 # Fallback
00:02:01.875 container=$agent
00:02:01.875 fi
00:02:01.875 fi
00:02:01.875 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:01.875
00:02:01.886 [Pipeline] }
00:02:01.904 [Pipeline] // withEnv
00:02:01.911 [Pipeline] setCustomBuildProperty
00:02:01.922 [Pipeline] stage
00:02:01.923 [Pipeline] { (Tests)
00:02:01.934 [Pipeline] sh
00:02:02.231 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:02.245 [Pipeline] timeout
00:02:02.245 Timeout set to expire in 1 hr 0 min
00:02:02.247 [Pipeline] {
00:02:02.261 [Pipeline] sh
00:02:02.538 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:03.106 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:02:03.118 [Pipeline] sh
00:02:03.398 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:03.398 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:03.413 [Pipeline] sh
00:02:03.692 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:03.708 [Pipeline] sh
00:02:03.989 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:02:03.989 ++ readlink -f spdk_repo
00:02:03.989 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:03.989 + [[ -n /home/vagrant/spdk_repo ]]
00:02:03.989 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:03.989 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:03.989 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:03.989 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:03.989 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:03.989 + cd /home/vagrant/spdk_repo
00:02:03.989 + source /etc/os-release
00:02:03.989 ++ NAME='CentOS Linux'
00:02:03.989 ++ VERSION='7 (Core)'
00:02:03.989 ++ ID=centos
00:02:03.989 ++ ID_LIKE='rhel fedora'
00:02:03.989 ++ VERSION_ID=7
00:02:03.989 ++ PRETTY_NAME='CentOS Linux 7 (Core)'
00:02:03.989 ++ ANSI_COLOR='0;31'
00:02:03.989 ++ CPE_NAME=cpe:/o:centos:centos:7
00:02:03.989 ++ HOME_URL=https://www.centos.org/
00:02:03.989 ++ BUG_REPORT_URL=https://bugs.centos.org/
00:02:03.989 ++ CENTOS_MANTISBT_PROJECT=CentOS-7
00:02:03.989 ++ CENTOS_MANTISBT_PROJECT_VERSION=7
00:02:03.989 ++ REDHAT_SUPPORT_PRODUCT=centos
00:02:03.989 ++ REDHAT_SUPPORT_PRODUCT_VERSION=7
00:02:03.989 + uname -a
00:02:03.989 Linux centos7-cloud-1711172311-2200 3.10.0-1160.114.2.el7.x86_64 #1 SMP Wed Mar 20 15:54:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:02:03.989 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:03.989 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:04.248 Hugepages
00:02:04.248 node hugesize free / total
00:02:04.248 node0 1048576kB 0 / 0
00:02:04.248 node0 2048kB 0 / 0
00:02:04.248
00:02:04.248 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:04.248 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:04.248 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1
00:02:04.248 + rm -f /tmp/spdk-ld-path
00:02:04.248 + source autorun-spdk.conf
00:02:04.248 ++ SPDK_TEST_UNITTEST=1
00:02:04.248 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.248 ++ SPDK_TEST_BLOCKDEV=1
00:02:04.248 ++ SPDK_RUN_ASAN=1
00:02:04.248 ++ SPDK_TEST_DAOS=1
00:02:04.248 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:04.248 ++ RUN_NIGHTLY=1
00:02:04.248 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:04.248 + [[ -n '' ]]
00:02:04.248 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:04.248 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:04.248 + for M in /var/spdk/build-*-manifest.txt
00:02:04.248 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:04.248 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:04.248 + for M in /var/spdk/build-*-manifest.txt
00:02:04.248 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:04.248 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:04.248 ++ uname
00:02:04.248 + [[ Linux == \L\i\n\u\x ]]
00:02:04.248 + sudo dmesg -T
00:02:04.248 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:04.508 + sudo dmesg --clear
00:02:04.508 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:02:04.508 + dmesg_pid=2721
00:02:04.508 + [[ CentOS Linux == FreeBSD ]]
00:02:04.508 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:04.508 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:04.508 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:04.508 + [[ -x /usr/src/fio-static/fio ]]
00:02:04.508 + sudo dmesg -Tw
00:02:04.508 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:04.508 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:04.508 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:04.508 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:04.508 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:04.508 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:04.508 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:04.508 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:04.508 Test configuration:
00:02:04.508 SPDK_TEST_UNITTEST=1
00:02:04.508 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.508 SPDK_TEST_BLOCKDEV=1
00:02:04.508 SPDK_RUN_ASAN=1
00:02:04.508 SPDK_TEST_DAOS=1
00:02:04.508 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:04.508 RUN_NIGHTLY=1
04:39:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:04.508 04:39:17 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:04.508 04:39:17 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:04.508 04:39:17 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:04.508 04:39:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:02:04.508 04:39:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:02:04.508 04:39:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:02:04.508 04:39:17 -- paths/export.sh@5 -- $ export PATH
00:02:04.508 04:39:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
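Each paths/export.sh@N line above prepends the same toolchain directories again, so the echoed PATH carries duplicate entries. Harmless here, but a guarded prepend (illustrative only; not SPDK's actual helper) avoids the growth:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH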
00:02:04.508 04:39:17 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:04.508 04:39:17 -- common/autobuild_common.sh@435 -- $ date +%s
00:02:04.508 04:39:17 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715747957.XXXXXX
00:02:04.508 04:39:17 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715747957.zG7B1k
00:02:04.508 04:39:17 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:02:04.508 04:39:17 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:02:04.508 04:39:17 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:04.508 04:39:17 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:04.508 04:39:17 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:04.508 04:39:17 -- common/autobuild_common.sh@451 -- $ get_config_params
00:02:04.508 04:39:17 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:02:04.508 04:39:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.508 04:39:17 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos'
00:02:04.508 04:39:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:04.508 04:39:17 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:04.508 04:39:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:04.508 04:39:17 -- spdk/autobuild.sh@16 -- $ date -u
00:02:04.508 Wed May 15 04:39:17 UTC 2024
00:02:04.508 04:39:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:04.508 LTS-24-g36faa8c31
00:02:04.508 04:39:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:04.508 04:39:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:04.508 04:39:17 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:04.508 04:39:17 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:04.508 04:39:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.508 ************************************
00:02:04.508 START TEST asan
00:02:04.508 ************************************
00:02:04.508 using asan
00:02:04.508 04:39:17 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:02:04.508
00:02:04.508 real 0m0.000s
00:02:04.508 user 0m0.000s
00:02:04.508 sys 0m0.000s
00:02:04.508 04:39:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:04.508 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.508 ************************************
00:02:04.508 END TEST asan
00:02:04.508 ************************************
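The START TEST/END TEST banners and the real/user/sys triple above come from SPDK's run_test wrapper in common/autotest_common.sh. A stripped-down sketch of the pattern, assuming only what the log shows (the real helper also validates arguments and toggles xtrace):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test asan echo 'using asan'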
00:02:04.508 04:39:17 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
00:02:04.508 04:39:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:04.508 04:39:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:04.508 04:39:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:04.508 04:39:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:04.508 04:39:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:04.508 04:39:17 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:04.508 04:39:17 -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:04.508 04:39:17 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:02:04.508 04:39:17 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:02:04.508 04:39:17 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:04.508 04:39:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.768 ************************************
00:02:04.768 START TEST unittest_build
00:02:04.768 ************************************
00:02:04.768 04:39:17 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:02:04.768 04:39:17 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos --without-shared
00:02:04.768 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:04.768 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:05.027 RDMA_OPTION_ID_ACK_TIMEOUT is not supported
00:02:05.286 Using 'verbs' RDMA provider
00:02:05.855 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer.
00:02:05.855 Without ISA-L, there is no software support for crypto or compression,
00:02:05.855 so these features will be disabled.
00:02:06.114 Creating mk/config.mk...done.
00:02:06.114 Creating mk/cc.flags.mk...done.
00:02:06.114 Type 'make' to build.
00:02:06.114 04:39:19 -- common/autobuild_common.sh@403 -- $ make -j10
00:02:06.373 make[1]: Nothing to be done for 'all'.
00:02:11.643 The Meson build system
00:02:11.643 Version: 0.61.5
00:02:11.643 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:11.643 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:11.643 Build type: native build
00:02:11.643 Program cat found: YES (/bin/cat)
00:02:11.643 Project name: DPDK
00:02:11.643 Project version: 23.11.0
00:02:11.643 C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)")
00:02:11.643 C linker for the host machine: cc ld.bfd 2.35-5
00:02:11.643 Host machine cpu family: x86_64
00:02:11.643 Host machine cpu: x86_64
00:02:11.643 Message: ## Building in Developer Mode ##
00:02:11.643 Program pkg-config found: YES (/bin/pkg-config)
00:02:11.643 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:11.643 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:11.643 Program python3 found: YES (/usr/bin/python3)
00:02:11.643 Program cat found: YES (/bin/cat)
00:02:11.643 Compiler for C supports arguments -march=native: YES
00:02:11.643 Checking for size of "void *" : 8
00:02:11.643 Checking for size of "void *" : 8
00:02:11.643 Library m found: YES
00:02:11.643 Library numa found: YES
00:02:11.643 Has header "numaif.h" : YES
00:02:11.643 Library fdt found: NO
00:02:11.643 Library execinfo found: NO
00:02:11.643 Has header "execinfo.h" : YES
00:02:11.643 Found pkg-config: /bin/pkg-config (0.27.1)
00:02:11.643 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:11.643 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:11.643 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:11.643 Run-time dependency openssl found: YES 1.0.2k
00:02:11.643 Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:11.643 Library pcap found: NO
00:02:11.643 Compiler for C supports arguments -Wcast-qual: YES
00:02:11.643 Compiler for C supports arguments -Wdeprecated: YES
00:02:11.643 Compiler for C supports arguments -Wformat: YES
00:02:11.643 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:11.643 Compiler for C supports arguments -Wformat-security: NO
00:02:11.643 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:11.643 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:11.643 Compiler for C supports arguments -Wnested-externs: YES
00:02:11.643 Compiler for C supports arguments -Wold-style-definition: YES
00:02:11.643 Compiler for C supports arguments -Wpointer-arith: YES
00:02:11.643 Compiler for C supports arguments -Wsign-compare: YES
00:02:11.643 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:11.643 Compiler for C supports arguments -Wundef: YES
00:02:11.643 Compiler for C supports arguments -Wwrite-strings: YES
00:02:11.643 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:11.643 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:11.643 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:11.643 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:11.643 Program objdump found: YES (/bin/objdump)
00:02:11.643 Compiler for C supports arguments -mavx512f: YES
00:02:11.643 Checking if "AVX512 checking" compiles: YES
00:02:11.643 Fetching value of define "__SSE4_2__" : 1
00:02:11.643 Fetching value of define "__AES__" : 1
00:02:11.643 Fetching value of define "__AVX__" : 1
00:02:11.643 Fetching value of define "__AVX2__" : 1
00:02:11.643 Fetching value of define "__AVX512BW__" : 1
00:02:11.643 Fetching value of define "__AVX512CD__" : 1
00:02:11.643 Fetching value of define "__AVX512DQ__" : 1
00:02:11.643 Fetching value of define "__AVX512F__" : 1
00:02:11.643 Fetching value of define "__AVX512VL__" : 1
00:02:11.643 Fetching value of define "__PCLMUL__" : 1
00:02:11.643 Fetching value of define "__RDRND__" : 1
00:02:11.643 Fetching value of define "__RDSEED__" : 1
00:02:11.643 Fetching value of define "__VPCLMULQDQ__" :
00:02:11.643 Fetching value of define "__znver1__" :
00:02:11.643 Fetching value of define "__znver2__" :
00:02:11.643 Fetching value of define "__znver3__" :
00:02:11.643 Fetching value of define "__znver4__" :
00:02:11.643 Library asan found: YES
00:02:11.643 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:11.643 Message: lib/log: Defining dependency "log"
00:02:11.643 Message: lib/kvargs: Defining dependency "kvargs"
00:02:11.643 Message: lib/telemetry: Defining dependency "telemetry"
00:02:11.643 Library rt found: YES
00:02:11.643 Checking for function "getentropy" : NO
00:02:11.643 Message: lib/eal: Defining dependency "eal"
00:02:11.643 Message: lib/ring: Defining dependency "ring"
00:02:11.643 Message: lib/rcu: Defining dependency "rcu"
00:02:11.643 Message: lib/mempool: Defining dependency "mempool"
00:02:11.643 Message: lib/mbuf: Defining dependency "mbuf"
00:02:11.643 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:11.643 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:11.643 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:12.606 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:12.606 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:12.606 Fetching value of define "__VPCLMULQDQ__" : (cached)
00:02:12.606 Compiler for C supports arguments -mpclmul: YES
00:02:12.606 Compiler for C supports arguments -maes: YES
00:02:12.606 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:12.606 Compiler for C supports arguments -mavx512bw: YES
00:02:12.607 Compiler for C supports arguments -mavx512dq: YES
00:02:12.607 Compiler for C supports arguments -mavx512vl: YES
00:02:12.607 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:12.607 Compiler for C supports arguments -mavx2: YES
00:02:12.607 Compiler for C supports arguments -mavx: YES
00:02:12.607 Message: lib/net: Defining dependency "net"
00:02:12.607 Message: lib/meter: Defining dependency "meter"
00:02:12.607 Message: lib/ethdev: Defining dependency "ethdev"
00:02:12.607 Message: lib/pci: Defining dependency "pci"
00:02:12.607 Message: lib/cmdline: Defining dependency "cmdline"
00:02:12.607 Message: lib/hash: Defining dependency "hash"
00:02:12.607 Message: lib/timer: Defining dependency "timer"
00:02:12.607 Message: lib/compressdev: Defining dependency "compressdev"
00:02:12.607 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:12.607 Message: lib/dmadev: Defining dependency "dmadev"
00:02:12.607 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:12.607 Message: lib/power: Defining dependency "power"
00:02:12.607 Message: lib/reorder: Defining dependency "reorder"
00:02:12.607 Message: lib/security: Defining dependency "security"
00:02:12.607 Has header "linux/userfaultfd.h" : YES
00:02:12.607 Has header "linux/vduse.h" : NO
00:02:12.607 Message: lib/vhost: Defining dependency "vhost"
00:02:12.607 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:12.607 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:12.607 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:12.607 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:12.607 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:12.607 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:12.607 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:12.607 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:12.607 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:12.607 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:12.607 Program doxygen found: YES (/bin/doxygen)
00:02:12.607 Configuring doxy-api-html.conf using configuration
00:02:12.607 Configuring doxy-api-man.conf using configuration
00:02:12.607 Program mandb found: YES (/bin/mandb)
00:02:12.607 Program sphinx-build found: NO
00:02:12.607 Configuring rte_build_config.h using configuration
00:02:12.607 Message:
00:02:12.607 =================
00:02:12.607 Applications Enabled
00:02:12.607 =================
00:02:12.607
00:02:12.607 apps:
00:02:12.607
00:02:12.607
00:02:12.607 Message:
00:02:12.607 =================
00:02:12.607 Libraries Enabled
00:02:12.607 =================
00:02:12.607
00:02:12.607 libs:
00:02:12.607 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:12.607 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:12.607 cryptodev, dmadev, power, reorder, security, vhost,
00:02:12.607
00:02:12.607 Message:
00:02:12.607 ===============
00:02:12.607 Drivers Enabled
00:02:12.607 ===============
00:02:12.607
00:02:12.607 common:
00:02:12.607
00:02:12.607 bus:
00:02:12.607 pci, vdev,
00:02:12.607 mempool:
00:02:12.607 ring,
00:02:12.607 dma:
00:02:12.607
00:02:12.607 net:
00:02:12.607
00:02:12.607 crypto:
00:02:12.607
00:02:12.607 compress:
00:02:12.607 vdpa:
00:02:12.607
00:02:12.607
00:02:12.607 Message:
00:02:12.607 =================
00:02:12.607 Content Skipped
00:02:12.607 =================
00:02:12.607
00:02:12.607 apps:
00:02:12.607 dumpcap: explicitly disabled via build config
00:02:12.607 graph: explicitly disabled via build config
00:02:12.607 pdump: explicitly disabled via build config
00:02:12.607 proc-info: explicitly disabled via build config
00:02:12.607 test-acl: explicitly disabled via build config
00:02:12.607 test-bbdev: explicitly disabled via build config
00:02:12.607 test-cmdline: explicitly disabled via build config
00:02:12.607 test-compress-perf: explicitly disabled via build config
00:02:12.607 test-crypto-perf: explicitly disabled via build config
00:02:12.607 test-dma-perf: explicitly disabled via build config
00:02:12.607 test-eventdev: explicitly disabled via build config
00:02:12.607 test-fib: explicitly disabled via build config
00:02:12.607 test-flow-perf: explicitly disabled via build config
00:02:12.607 test-gpudev: explicitly disabled via build config
00:02:12.607 test-mldev: explicitly disabled via build config
00:02:12.607 test-pipeline: explicitly disabled via build config
00:02:12.607 test-pmd: explicitly disabled via build config
00:02:12.607 test-regex: explicitly disabled via build config
00:02:12.607 test-sad: explicitly disabled via build config
00:02:12.607 test-security-perf: explicitly disabled via build config
00:02:12.607
00:02:12.607 libs:
00:02:12.607 metrics: explicitly disabled via build config
00:02:12.607 acl: explicitly disabled via build config
00:02:12.607 bbdev: explicitly disabled via build config
00:02:12.607 bitratestats: explicitly disabled via build config
00:02:12.607 bpf: explicitly disabled via build config
00:02:12.607 cfgfile: explicitly disabled via build config
00:02:12.607 distributor: explicitly disabled via build config
00:02:12.607 efd: explicitly disabled via build config
00:02:12.607 eventdev: explicitly disabled via build config
00:02:12.607 dispatcher: explicitly disabled via build config
00:02:12.607 gpudev: explicitly disabled via build config
00:02:12.607 gro: explicitly disabled via build config
00:02:12.607 gso: explicitly disabled via build config
00:02:12.607 ip_frag: explicitly disabled via build config
00:02:12.607 jobstats: explicitly disabled via build config
00:02:12.607 latencystats: explicitly disabled via build config
00:02:12.607 lpm: explicitly disabled via build config
00:02:12.607 member: explicitly disabled via build config
00:02:12.607 pcapng: explicitly disabled via build config
00:02:12.607 rawdev: explicitly disabled via build config
00:02:12.607 regexdev: explicitly disabled via build config
00:02:12.607 mldev: explicitly disabled via build config
00:02:12.607 rib: explicitly disabled via build config
00:02:12.607 sched: explicitly disabled via build config
00:02:12.607 stack: explicitly disabled via build config
00:02:12.607 ipsec: explicitly disabled via build config
00:02:12.607 pdcp: explicitly disabled via build config
00:02:12.607 fib: explicitly disabled via build config
00:02:12.607 port: explicitly disabled via build config
00:02:12.607 pdump: explicitly disabled via build config
00:02:12.607 table: explicitly disabled via build config
00:02:12.607 pipeline: explicitly disabled via build config
00:02:12.607 graph: explicitly disabled via build config
00:02:12.607 node: explicitly disabled via build config
00:02:12.607
00:02:12.607 drivers:
00:02:12.607 common/cpt: not in enabled drivers build config
00:02:12.607 common/dpaax: not in enabled drivers build config
00:02:12.607 common/iavf: not in enabled drivers build config
00:02:12.607 common/idpf: not in enabled drivers build config
00:02:12.607 common/mvep: not in enabled drivers build config
00:02:12.607 common/octeontx: not in enabled drivers build config
00:02:12.607 bus/auxiliary: not in enabled drivers build config
00:02:12.607 bus/cdx: not in enabled drivers build config
00:02:12.607 bus/dpaa: not in enabled drivers build config
00:02:12.607 bus/fslmc: not in enabled drivers build config
00:02:12.607 bus/ifpga: not in enabled drivers build config
00:02:12.607 bus/platform: not in enabled drivers build config
00:02:12.607 bus/vmbus: not in enabled drivers build config
00:02:12.607 common/cnxk: not in enabled drivers build config
00:02:12.607 common/mlx5: not in enabled drivers build config
00:02:12.607 common/nfp: not in enabled drivers build config
00:02:12.607 common/qat: not in enabled drivers build config
00:02:12.607 common/sfc_efx: not in enabled drivers build config
00:02:12.607 mempool/bucket: not in enabled drivers build config
00:02:12.607 mempool/cnxk: not in enabled drivers build config
00:02:12.607 mempool/dpaa: not in enabled drivers build config
00:02:12.607 mempool/dpaa2: not in enabled drivers build config
00:02:12.607 mempool/octeontx: not in enabled drivers build config
00:02:12.607 mempool/stack: not in enabled drivers build config
00:02:12.607 dma/cnxk: not in enabled drivers build config
00:02:12.607 dma/dpaa: not in enabled drivers build config
00:02:12.607 dma/dpaa2: not in enabled drivers build config
00:02:12.607 dma/hisilicon: not in enabled drivers build config
00:02:12.607 dma/idxd: not in enabled drivers build config
00:02:12.607 dma/ioat: not in enabled drivers build config
00:02:12.607 dma/skeleton: not in enabled drivers build config
00:02:12.607 net/af_packet: not in enabled drivers build config
00:02:12.607 net/af_xdp: not in enabled drivers build config
00:02:12.607 net/ark: not in enabled drivers build config
00:02:12.607 net/atlantic: not in enabled drivers build config
00:02:12.607 net/avp: not in enabled drivers build config
00:02:12.607 net/axgbe: not in enabled drivers build config
00:02:12.607 net/bnx2x: not in enabled drivers build config
00:02:12.607 net/bnxt: not in enabled drivers build config
00:02:12.607 net/bonding: not in enabled drivers build config
00:02:12.607 net/cnxk: not in enabled drivers build config
00:02:12.607 net/cpfl: not in enabled drivers build config
00:02:12.607 net/cxgbe: not in enabled drivers build config
00:02:12.607 net/dpaa: not in enabled drivers build config
00:02:12.607 net/dpaa2: not in enabled drivers build config
00:02:12.607 net/e1000: not in enabled drivers build config
00:02:12.607 net/ena: not in enabled drivers build config
00:02:12.607 net/enetc: not in enabled drivers build config
00:02:12.607 net/enetfec: not in enabled drivers build config
00:02:12.607 net/enic: not in enabled drivers build config
00:02:12.607 net/failsafe: not in enabled drivers build config
00:02:12.607 net/fm10k: not in enabled drivers build config
00:02:12.607 net/gve: not in enabled drivers build config
00:02:12.607 net/hinic: not in enabled drivers build config
00:02:12.607 net/hns3: not in enabled drivers build config
00:02:12.607 net/i40e: not in enabled drivers build config
00:02:12.607 net/iavf: not in enabled drivers build config
00:02:12.607 net/ice: not in enabled drivers build config
00:02:12.607 net/idpf: not in enabled drivers build config
00:02:12.607 net/igc: not in enabled drivers build config
00:02:12.607 net/ionic: not in enabled drivers build config
00:02:12.607 net/ipn3ke: not in enabled drivers build config
00:02:12.607 net/ixgbe: not in enabled drivers build config
00:02:12.607 net/mana: not in enabled drivers build config
00:02:12.607 net/memif: not in enabled drivers build config
00:02:12.607 net/mlx4: not in enabled drivers build config
00:02:12.607 net/mlx5: not in enabled drivers build config
00:02:12.608 net/mvneta: not in enabled drivers build config
00:02:12.608 net/mvpp2: not in enabled drivers build config
00:02:12.608 net/netvsc: not in enabled drivers build config
00:02:12.608 net/nfb: not in enabled drivers build config
00:02:12.608 net/nfp: not in enabled drivers build config
00:02:12.608 net/ngbe: not in enabled drivers build config
00:02:12.608 net/null: not in enabled drivers build config
00:02:12.608 net/octeontx: not in enabled drivers build config
00:02:12.608 net/octeon_ep: not in enabled drivers build config
00:02:12.608 net/pcap: not in enabled drivers build config
00:02:12.608 net/pfe: not in enabled drivers build config
00:02:12.608 net/qede: not in enabled drivers build config
00:02:12.608 net/ring: not in enabled drivers build config
00:02:12.608 net/sfc: not in enabled drivers build config
00:02:12.608 net/softnic: not in enabled drivers build config
00:02:12.608 net/tap: not in enabled drivers build config
00:02:12.608 net/thunderx: not in enabled drivers build config
00:02:12.608 net/txgbe: not in enabled drivers build config
00:02:12.608 net/vdev_netvsc: not in enabled drivers build config
00:02:12.608 net/vhost: not in enabled drivers build config
00:02:12.608 net/virtio: not in enabled drivers build config
00:02:12.608 net/vmxnet3: not in enabled drivers build config
00:02:12.608 raw/*: missing internal dependency, "rawdev"
00:02:12.608 crypto/armv8: not in enabled drivers build config
00:02:12.608 crypto/bcmfs: not in enabled drivers build config
00:02:12.608 crypto/caam_jr: not in enabled drivers build config
00:02:12.608 crypto/ccp: not in enabled drivers build config
00:02:12.608 crypto/cnxk: not in enabled drivers build config
00:02:12.608 crypto/dpaa_sec: not in enabled drivers build config
00:02:12.608 crypto/dpaa2_sec: not in enabled drivers build config
00:02:12.608 crypto/ipsec_mb: not in enabled drivers build config
00:02:12.608 crypto/mlx5: not in enabled drivers build config
00:02:12.608 crypto/mvsam: not in enabled drivers build config
00:02:12.608 crypto/nitrox: not in enabled drivers build config
00:02:12.608 crypto/null: not in enabled drivers build config
00:02:12.608 crypto/octeontx: not in enabled drivers build config
00:02:12.608 crypto/openssl: not in enabled drivers build config
00:02:12.608 crypto/scheduler: not in enabled drivers build config
00:02:12.608 crypto/uadk: not in enabled drivers build config
00:02:12.608 crypto/virtio: not in enabled drivers build config
00:02:12.608 compress/isal: not in enabled drivers build config
00:02:12.608 compress/mlx5: not in enabled drivers build config
00:02:12.608 compress/octeontx: not in enabled drivers build config
00:02:12.608 compress/zlib: not in enabled drivers build config
00:02:12.608 regex/*: missing internal dependency, "regexdev"
00:02:12.608 ml/*: missing internal dependency, "mldev"
00:02:12.608 vdpa/ifc: not in enabled drivers build config
00:02:12.608 vdpa/mlx5: not in enabled drivers build config
00:02:12.608 vdpa/nfp: not in enabled drivers build config
00:02:12.608 vdpa/sfc: not in enabled drivers build config
00:02:12.608 event/*: missing internal dependency, "eventdev"
00:02:12.608 baseband/*: missing internal dependency, "bbdev"
00:02:12.608 gpu/*: missing internal dependency, "gpudev"
00:02:12.608
00:02:12.608
00:02:13.177 Build targets in project: 85
00:02:13.177
00:02:13.177 DPDK 23.11.0
00:02:13.177
00:02:13.177 User defined options
00:02:13.177 buildtype : debug
00:02:13.177 default_library : static
00:02:13.177 libdir : lib
00:02:13.177 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:13.177 b_sanitize : address
00:02:13.177 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon
00:02:13.177 c_link_args :
00:02:13.177 cpu_instruction_set: native
00:02:13.177 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:13.177 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:13.177 enable_docs : false
00:02:13.177 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:13.177 enable_kmods : false
00:02:13.177 tests : false
00:02:13.177
00:02:13.177 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:13.177 NOTICE: You are using Python 3.6 which is EOL. Starting with v0.62.0, Meson will require Python 3.7 or newer
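"User defined options" above is meson echoing back the configure arguments it received. Reassembled as a command line (an assumed reconstruction from that dump, not copied from this log; the elided disable_apps/disable_libs values are the comma-separated lists shown above):

    meson setup build-tmp \
        --buildtype=debug --default-library=static --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=... -Ddisable_libs=... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false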
00:02:13.745 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:13.745 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:13.745 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:13.745 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:13.745 [4/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:13.745 [5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:13.745 [6/264] Linking static target lib/librte_kvargs.a
00:02:13.745 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:13.745 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:13.745 [9/264] Linking static target lib/librte_log.a
00:02:14.003 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:14.003 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:14.003 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:14.003 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:14.003 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:14.003 [15/264] Linking static target lib/librte_telemetry.a
00:02:14.003 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:14.003 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:14.003 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:14.263 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:14.263 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:14.263 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:14.263 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:14.263 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:14.263 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:14.263 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:14.263 [26/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.522 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:14.522 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:14.522 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:14.522 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:14.522 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:14.522 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:14.522 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:14.522 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:14.522 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:14.522 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:14.522 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:14.522 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:14.780 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:14.780 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:14.780 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:14.780 [42/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.780 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:14.780 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:14.780 [45/264] Linking target lib/librte_log.so.24.0
00:02:14.780 [46/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:14.780 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:14.780 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:14.780 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:15.038 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:15.038 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:15.038 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:15.038 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:15.038 [54/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.038 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:15.038 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:15.038 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:15.038 [58/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:15.038 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:15.038 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:15.038 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:15.038 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:15.297 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:15.297 [64/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:15.297 [65/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:15.297 [66/264] Linking target lib/librte_kvargs.so.24.0
00:02:15.297 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:15.297 [68/264] Linking target lib/librte_telemetry.so.24.0
00:02:15.297 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:15.297 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:15.297 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:15.297 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:15.297 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:15.297 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:15.297 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:15.297 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:15.297 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:15.297 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:15.555 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:15.555 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:15.555 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:15.555 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:15.555 [83/264] Linking static target lib/librte_ring.a
00:02:15.555 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:15.555 [85/264] Linking static target lib/librte_eal.a
00:02:15.555 [86/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:15.555 [87/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:15.555 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:15.813 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:15.813 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:15.813 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:15.813 [92/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:15.813 [93/264] Linking static target lib/librte_mempool.a
00:02:15.813 [94/264] Linking static target lib/librte_rcu.a
00:02:15.813 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:15.813 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:16.072 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:16.072 [98/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:16.072 [99/264] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:16.072 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:16.072 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:16.072 [102/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.072 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:16.072 [104/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.330 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.330 [106/264] Linking static target lib/librte_net.a 00:02:16.330 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.330 [108/264] Linking static target lib/librte_meter.a 00:02:16.330 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:16.330 [110/264] Linking static target lib/librte_mbuf.a 00:02:16.330 [111/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.330 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.589 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.589 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.589 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.848 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.848 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.848 [118/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.848 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.848 [120/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.848 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.107 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.107 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.107 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.107 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.107 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.107 [127/264] Linking static target lib/librte_pci.a 00:02:17.366 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.366 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.366 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.366 [131/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.366 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.366 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.366 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.366 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.366 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.366 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.366 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.366 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.366 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.366 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.366 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.366 [143/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.624 [144/264] Linking static target lib/librte_cmdline.a 00:02:17.624 [145/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.624 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.624 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.882 [148/264] Linking static target lib/librte_timer.a 00:02:17.882 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.882 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.882 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.882 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.882 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.141 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.141 [155/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.141 [156/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.141 [157/264] Linking static target lib/librte_compressdev.a 00:02:18.141 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.141 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.141 [160/264] Linking static target lib/librte_dmadev.a 00:02:18.399 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.399 [162/264] Linking static target lib/librte_hash.a 00:02:18.399 [163/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.399 [164/264] Linking static target lib/librte_ethdev.a 00:02:18.399 [165/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.399 [166/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.399 [167/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.399 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.399 [169/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.658 [170/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.658 [171/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.917 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.917 [173/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.917 [174/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.917 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.917 [176/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.179 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.179 [178/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.179 [179/264] Linking static target lib/librte_cryptodev.a 00:02:19.179 [180/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.179 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.179 [182/264] Linking static target lib/librte_power.a 00:02:19.180 [183/264] Generating lib/hash.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:19.180 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:19.443 [185/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.443 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:19.443 [187/264] Linking static target lib/librte_reorder.a 00:02:19.444 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:19.444 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.444 [190/264] Linking static target lib/librte_security.a 00:02:19.703 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.962 [192/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.962 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.962 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.962 [195/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.221 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.221 [197/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.221 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.221 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.221 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.221 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.480 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.480 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.480 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.480 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.480 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.738 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.738 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.738 [209/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.738 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.738 [211/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.738 [212/264] Linking static target drivers/librte_bus_vdev.a 00:02:20.738 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.738 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.738 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:20.738 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.738 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.995 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.995 [219/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.995 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.995 [221/264] Linking static target 
drivers/librte_mempool_ring.a 00:02:21.254 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.513 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.080 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.458 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.458 [226/264] Linking target lib/librte_eal.so.24.0 00:02:24.027 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:24.027 [228/264] Linking target lib/librte_meter.so.24.0 00:02:24.027 [229/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.027 [230/264] Linking target lib/librte_pci.so.24.0 00:02:24.027 [231/264] Linking target lib/librte_ring.so.24.0 00:02:24.027 [232/264] Linking target lib/librte_dmadev.so.24.0 00:02:24.027 [233/264] Linking target lib/librte_timer.so.24.0 00:02:24.027 [234/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:24.286 [235/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:24.286 [236/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:24.546 [237/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:24.546 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:24.546 [239/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:24.546 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:24.546 [241/264] Linking target lib/librte_mempool.so.24.0 00:02:24.546 [242/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:24.804 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:25.062 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:25.062 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:25.062 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:25.630 [247/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.630 [248/264] Linking static target lib/librte_vhost.a 00:02:25.630 [249/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:25.630 [250/264] Linking target lib/librte_compressdev.so.24.0 00:02:25.630 [251/264] Linking target lib/librte_reorder.so.24.0 00:02:25.630 [252/264] Linking target lib/librte_cryptodev.so.24.0 00:02:25.630 [253/264] Linking target lib/librte_net.so.24.0 00:02:26.198 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:26.198 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:26.198 [256/264] Linking target lib/librte_hash.so.24.0 00:02:26.198 [257/264] Linking target lib/librte_cmdline.so.24.0 00:02:26.198 [258/264] Linking target lib/librte_security.so.24.0 00:02:26.198 [259/264] Linking target lib/librte_ethdev.so.24.0 00:02:26.766 [260/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:26.766 [261/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:26.766 [262/264] Linking target lib/librte_power.so.24.0 00:02:27.703 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.703 [264/264] Linking target 
lib/librte_vhost.so.24.0 00:02:27.703 NOTICE: You are using Python 3.6 which is EOL. Starting with v0.62.0, Meson will require Python 3.7 or newer 00:02:29.607 CC lib/ut_mock/mock.o 00:02:29.607 CC lib/ut/ut.o 00:02:29.607 CC lib/log/log.o 00:02:29.607 CC lib/log/log_flags.o 00:02:29.607 CC lib/log/log_deprecated.o 00:02:29.607 LIB libspdk_ut_mock.a 00:02:29.607 LIB libspdk_ut.a 00:02:29.607 LIB libspdk_log.a 00:02:29.865 CXX lib/trace_parser/trace.o 00:02:29.865 CC lib/util/base64.o 00:02:29.865 CC lib/dma/dma.o 00:02:29.865 CC lib/util/bit_array.o 00:02:29.865 CC lib/ioat/ioat.o 00:02:29.865 CC lib/util/cpuset.o 00:02:29.865 CC lib/util/crc16.o 00:02:29.865 CC lib/util/crc32.o 00:02:29.865 CC lib/util/crc32c.o 00:02:29.865 CC lib/vfio_user/host/vfio_user_pci.o 00:02:30.122 CC lib/vfio_user/host/vfio_user.o 00:02:30.122 CC lib/util/crc32_ieee.o 00:02:30.122 LIB libspdk_dma.a 00:02:30.122 CC lib/util/crc64.o 00:02:30.122 CC lib/util/dif.o 00:02:30.122 CC lib/util/fd.o 00:02:30.122 LIB libspdk_ioat.a 00:02:30.122 CC lib/util/file.o 00:02:30.122 CC lib/util/hexlify.o 00:02:30.122 CC lib/util/iov.o 00:02:30.122 CC lib/util/math.o 00:02:30.122 LIB libspdk_vfio_user.a 00:02:30.122 CC lib/util/pipe.o 00:02:30.122 CC lib/util/strerror_tls.o 00:02:30.379 CC lib/util/string.o 00:02:30.379 CC lib/util/uuid.o 00:02:30.379 CC lib/util/fd_group.o 00:02:30.379 CC lib/util/xor.o 00:02:30.379 CC lib/util/zipf.o 00:02:30.379 LIB libspdk_util.a 00:02:30.636 CC lib/idxd/idxd.o 00:02:30.636 CC lib/rdma/common.o 00:02:30.636 CC lib/conf/conf.o 00:02:30.636 CC lib/json/json_parse.o 00:02:30.636 CC lib/vmd/vmd.o 00:02:30.636 CC lib/idxd/idxd_user.o 00:02:30.636 CC lib/env_dpdk/env.o 00:02:30.636 LIB libspdk_trace_parser.a 00:02:30.636 CC lib/rdma/rdma_verbs.o 00:02:30.636 CC lib/json/json_util.o 00:02:30.636 CC lib/json/json_write.o 00:02:30.893 LIB libspdk_conf.a 00:02:30.893 CC lib/env_dpdk/memory.o 00:02:30.893 CC lib/env_dpdk/pci.o 00:02:30.893 CC lib/vmd/led.o 00:02:30.893 CC lib/env_dpdk/init.o 00:02:30.893 CC lib/env_dpdk/threads.o 00:02:30.893 LIB libspdk_rdma.a 00:02:30.893 CC lib/env_dpdk/pci_ioat.o 00:02:30.893 LIB libspdk_json.a 00:02:30.893 CC lib/env_dpdk/pci_virtio.o 00:02:30.893 CC lib/env_dpdk/pci_vmd.o 00:02:30.893 LIB libspdk_vmd.a 00:02:30.893 LIB libspdk_idxd.a 00:02:30.893 CC lib/env_dpdk/pci_idxd.o 00:02:30.893 CC lib/env_dpdk/pci_event.o 00:02:30.893 CC lib/env_dpdk/sigbus_handler.o 00:02:31.152 CC lib/jsonrpc/jsonrpc_server.o 00:02:31.152 CC lib/env_dpdk/pci_dpdk.o 00:02:31.152 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.152 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:31.152 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:31.152 CC lib/jsonrpc/jsonrpc_client.o 00:02:31.152 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:31.411 LIB libspdk_jsonrpc.a 00:02:31.411 CC lib/rpc/rpc.o 00:02:31.411 LIB libspdk_env_dpdk.a 00:02:31.669 LIB libspdk_rpc.a 00:02:31.669 CC lib/trace/trace.o 00:02:31.669 CC lib/sock/sock.o 00:02:31.669 CC lib/trace/trace_flags.o 00:02:31.669 CC lib/sock/sock_rpc.o 00:02:31.669 CC lib/trace/trace_rpc.o 00:02:31.669 CC lib/notify/notify.o 00:02:31.669 CC lib/notify/notify_rpc.o 00:02:31.928 LIB libspdk_notify.a 00:02:31.928 LIB libspdk_trace.a 00:02:31.928 LIB libspdk_sock.a 00:02:32.186 CC lib/thread/thread.o 00:02:32.186 CC lib/thread/iobuf.o 00:02:32.186 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.186 CC lib/nvme/nvme_ctrlr.o 00:02:32.186 CC lib/nvme/nvme_fabric.o 00:02:32.186 CC lib/nvme/nvme_ns_cmd.o 00:02:32.186 CC lib/nvme/nvme_ns.o 00:02:32.186 CC lib/nvme/nvme_pcie_common.o 00:02:32.186 
CC lib/nvme/nvme_pcie.o 00:02:32.186 CC lib/nvme/nvme_qpair.o 00:02:32.186 CC lib/nvme/nvme.o 00:02:32.754 CC lib/nvme/nvme_quirks.o 00:02:32.754 CC lib/nvme/nvme_transport.o 00:02:32.754 CC lib/nvme/nvme_discovery.o 00:02:32.754 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.754 LIB libspdk_thread.a 00:02:32.754 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.754 CC lib/nvme/nvme_tcp.o 00:02:32.754 CC lib/nvme/nvme_opal.o 00:02:32.754 CC lib/accel/accel.o 00:02:32.754 CC lib/nvme/nvme_io_msg.o 00:02:33.013 CC lib/nvme/nvme_poll_group.o 00:02:33.013 CC lib/nvme/nvme_zns.o 00:02:33.013 CC lib/accel/accel_rpc.o 00:02:33.013 CC lib/nvme/nvme_cuse.o 00:02:33.271 CC lib/blob/blobstore.o 00:02:33.271 CC lib/nvme/nvme_vfio_user.o 00:02:33.271 CC lib/accel/accel_sw.o 00:02:33.271 CC lib/blob/request.o 00:02:33.271 CC lib/nvme/nvme_rdma.o 00:02:33.271 CC lib/blob/zeroes.o 00:02:33.271 LIB libspdk_accel.a 00:02:33.271 CC lib/init/json_config.o 00:02:33.271 CC lib/blob/blob_bs_dev.o 00:02:33.271 CC lib/init/subsystem.o 00:02:33.529 CC lib/init/subsystem_rpc.o 00:02:33.529 CC lib/virtio/virtio.o 00:02:33.529 CC lib/init/rpc.o 00:02:33.529 CC lib/virtio/virtio_vhost_user.o 00:02:33.529 CC lib/virtio/virtio_vfio_user.o 00:02:33.529 CC lib/virtio/virtio_pci.o 00:02:33.529 LIB libspdk_init.a 00:02:33.788 CC lib/bdev/bdev.o 00:02:33.788 CC lib/event/app.o 00:02:33.788 CC lib/bdev/bdev_rpc.o 00:02:33.788 CC lib/event/reactor.o 00:02:33.788 LIB libspdk_virtio.a 00:02:33.788 CC lib/bdev/bdev_zone.o 00:02:33.788 CC lib/event/log_rpc.o 00:02:33.788 CC lib/event/app_rpc.o 00:02:33.788 CC lib/event/scheduler_static.o 00:02:33.788 CC lib/bdev/part.o 00:02:33.788 CC lib/bdev/scsi_nvme.o 00:02:34.049 LIB libspdk_nvme.a 00:02:34.049 LIB libspdk_event.a 00:02:34.645 LIB libspdk_blob.a 00:02:34.645 CC lib/blobfs/blobfs.o 00:02:34.645 CC lib/blobfs/tree.o 00:02:34.645 CC lib/lvol/lvol.o 00:02:35.212 LIB libspdk_bdev.a 00:02:35.212 LIB libspdk_blobfs.a 00:02:35.212 LIB libspdk_lvol.a 00:02:35.212 CC lib/scsi/dev.o 00:02:35.212 CC lib/nbd/nbd.o 00:02:35.212 CC lib/ftl/ftl_core.o 00:02:35.212 CC lib/nvmf/ctrlr.o 00:02:35.212 CC lib/scsi/lun.o 00:02:35.212 CC lib/ftl/ftl_init.o 00:02:35.212 CC lib/nbd/nbd_rpc.o 00:02:35.212 CC lib/nvmf/ctrlr_discovery.o 00:02:35.212 CC lib/scsi/port.o 00:02:35.212 CC lib/ftl/ftl_layout.o 00:02:35.212 CC lib/scsi/scsi.o 00:02:35.470 CC lib/nvmf/ctrlr_bdev.o 00:02:35.470 CC lib/scsi/scsi_bdev.o 00:02:35.470 CC lib/ftl/ftl_debug.o 00:02:35.470 CC lib/scsi/scsi_pr.o 00:02:35.470 CC lib/ftl/ftl_io.o 00:02:35.470 CC lib/scsi/scsi_rpc.o 00:02:35.470 CC lib/scsi/task.o 00:02:35.470 CC lib/nvmf/subsystem.o 00:02:35.470 LIB libspdk_nbd.a 00:02:35.470 CC lib/nvmf/nvmf.o 00:02:35.470 CC lib/ftl/ftl_sb.o 00:02:35.470 CC lib/ftl/ftl_l2p.o 00:02:35.728 CC lib/nvmf/nvmf_rpc.o 00:02:35.728 CC lib/nvmf/transport.o 00:02:35.728 CC lib/ftl/ftl_l2p_flat.o 00:02:35.728 LIB libspdk_scsi.a 00:02:35.728 CC lib/nvmf/tcp.o 00:02:35.728 CC lib/ftl/ftl_nv_cache.o 00:02:35.728 CC lib/ftl/ftl_band.o 00:02:35.728 CC lib/ftl/ftl_band_ops.o 00:02:35.728 CC lib/iscsi/conn.o 00:02:35.987 CC lib/nvmf/rdma.o 00:02:35.987 CC lib/ftl/ftl_writer.o 00:02:35.987 CC lib/ftl/ftl_rq.o 00:02:35.987 CC lib/ftl/ftl_reloc.o 00:02:35.987 CC lib/iscsi/init_grp.o 00:02:35.987 CC lib/iscsi/iscsi.o 00:02:36.246 CC lib/vhost/vhost.o 00:02:36.246 CC lib/ftl/ftl_l2p_cache.o 00:02:36.246 CC lib/ftl/ftl_p2l.o 00:02:36.246 CC lib/vhost/vhost_rpc.o 00:02:36.246 CC lib/iscsi/md5.o 00:02:36.246 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.246 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.246 CC lib/vhost/vhost_scsi.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.503 CC lib/vhost/vhost_blk.o 00:02:36.503 CC lib/vhost/rte_vhost_user.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.503 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.762 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.762 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.762 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.762 CC lib/iscsi/param.o 00:02:36.762 CC lib/iscsi/portal_grp.o 00:02:36.762 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.762 CC lib/ftl/utils/ftl_conf.o 00:02:36.762 CC lib/iscsi/tgt_node.o 00:02:36.762 LIB libspdk_nvmf.a 00:02:36.762 CC lib/ftl/utils/ftl_md.o 00:02:37.020 CC lib/ftl/utils/ftl_mempool.o 00:02:37.020 CC lib/iscsi/iscsi_subsystem.o 00:02:37.020 CC lib/ftl/utils/ftl_bitmap.o 00:02:37.020 CC lib/ftl/utils/ftl_property.o 00:02:37.020 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:37.020 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:37.020 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:37.020 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:37.020 CC lib/iscsi/iscsi_rpc.o 00:02:37.020 CC lib/iscsi/task.o 00:02:37.020 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:37.020 LIB libspdk_vhost.a 00:02:37.020 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:37.278 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.278 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.278 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.278 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.278 CC lib/ftl/base/ftl_base_dev.o 00:02:37.278 CC lib/ftl/base/ftl_base_bdev.o 00:02:37.278 CC lib/ftl/ftl_trace.o 00:02:37.278 LIB libspdk_iscsi.a 00:02:37.536 LIB libspdk_ftl.a 00:02:37.794 CC module/env_dpdk/env_dpdk_rpc.o 00:02:37.794 CC module/accel/ioat/accel_ioat.o 00:02:37.794 CC module/accel/iaa/accel_iaa.o 00:02:37.794 CC module/accel/dsa/accel_dsa.o 00:02:37.794 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:37.794 CC module/accel/error/accel_error.o 00:02:37.794 CC module/sock/posix/posix.o 00:02:37.794 CC module/scheduler/gscheduler/gscheduler.o 00:02:37.794 CC module/blob/bdev/blob_bdev.o 00:02:37.794 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:37.794 LIB libspdk_env_dpdk_rpc.a 00:02:37.794 CC module/accel/dsa/accel_dsa_rpc.o 00:02:37.794 LIB libspdk_scheduler_gscheduler.a 00:02:37.794 LIB libspdk_scheduler_dpdk_governor.a 00:02:37.794 CC module/accel/ioat/accel_ioat_rpc.o 00:02:37.794 CC module/accel/error/accel_error_rpc.o 00:02:37.794 CC module/accel/iaa/accel_iaa_rpc.o 00:02:38.052 LIB libspdk_scheduler_dynamic.a 00:02:38.052 LIB libspdk_blob_bdev.a 00:02:38.052 LIB libspdk_accel_dsa.a 00:02:38.052 LIB libspdk_accel_ioat.a 00:02:38.052 LIB libspdk_accel_iaa.a 00:02:38.052 LIB libspdk_accel_error.a 00:02:38.052 CC module/bdev/delay/vbdev_delay.o 00:02:38.052 CC module/bdev/lvol/vbdev_lvol.o 00:02:38.052 CC module/bdev/gpt/gpt.o 00:02:38.052 CC module/bdev/error/vbdev_error.o 00:02:38.052 CC module/bdev/malloc/bdev_malloc.o 00:02:38.052 CC module/blobfs/bdev/blobfs_bdev.o 00:02:38.052 CC module/bdev/null/bdev_null.o 00:02:38.052 CC module/bdev/passthru/vbdev_passthru.o 00:02:38.052 CC module/bdev/nvme/bdev_nvme.o 00:02:38.310 CC module/bdev/gpt/vbdev_gpt.o 00:02:38.310 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:38.310 LIB libspdk_sock_posix.a 00:02:38.310 CC module/bdev/error/vbdev_error_rpc.o 00:02:38.310 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:02:38.310 CC module/bdev/null/bdev_null_rpc.o 00:02:38.310 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:38.310 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:38.310 LIB libspdk_blobfs_bdev.a 00:02:38.310 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:38.310 LIB libspdk_bdev_gpt.a 00:02:38.310 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:38.310 LIB libspdk_bdev_passthru.a 00:02:38.310 LIB libspdk_bdev_error.a 00:02:38.310 LIB libspdk_bdev_null.a 00:02:38.310 CC module/bdev/raid/bdev_raid.o 00:02:38.310 LIB libspdk_bdev_delay.a 00:02:38.310 LIB libspdk_bdev_malloc.a 00:02:38.568 CC module/bdev/aio/bdev_aio.o 00:02:38.568 CC module/bdev/split/vbdev_split.o 00:02:38.568 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:38.568 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.568 CC module/bdev/ftl/bdev_ftl.o 00:02:38.568 CC module/bdev/daos/bdev_daos.o 00:02:38.568 LIB libspdk_bdev_lvol.a 00:02:38.568 CC module/bdev/daos/bdev_daos_rpc.o 00:02:38.568 CC module/bdev/split/vbdev_split_rpc.o 00:02:38.826 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:38.826 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.826 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.826 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.826 LIB libspdk_bdev_daos.a 00:02:38.826 CC module/bdev/nvme/nvme_rpc.o 00:02:38.826 LIB libspdk_bdev_split.a 00:02:38.826 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.826 CC module/bdev/raid/bdev_raid_rpc.o 00:02:38.826 LIB libspdk_bdev_zone_block.a 00:02:38.826 CC module/bdev/raid/bdev_raid_sb.o 00:02:38.826 LIB libspdk_bdev_aio.a 00:02:38.826 CC module/bdev/raid/raid0.o 00:02:38.826 CC module/bdev/nvme/bdev_mdns_client.o 00:02:38.826 CC module/bdev/raid/raid1.o 00:02:38.826 LIB libspdk_bdev_ftl.a 00:02:38.826 CC module/bdev/raid/concat.o 00:02:38.826 CC module/bdev/nvme/vbdev_opal.o 00:02:39.087 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:39.087 LIB libspdk_bdev_virtio.a 00:02:39.087 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:39.087 LIB libspdk_bdev_raid.a 00:02:39.087 LIB libspdk_bdev_nvme.a 00:02:39.349 CC module/event/subsystems/vmd/vmd.o 00:02:39.349 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.349 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.349 CC module/event/subsystems/sock/sock.o 00:02:39.349 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.349 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.349 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.607 LIB libspdk_event_vhost_blk.a 00:02:39.607 LIB libspdk_event_sock.a 00:02:39.607 LIB libspdk_event_scheduler.a 00:02:39.607 LIB libspdk_event_vmd.a 00:02:39.607 LIB libspdk_event_iobuf.a 00:02:39.865 CC module/event/subsystems/accel/accel.o 00:02:39.865 LIB libspdk_event_accel.a 00:02:40.122 CC module/event/subsystems/bdev/bdev.o 00:02:40.123 LIB libspdk_event_bdev.a 00:02:40.380 CC module/event/subsystems/nbd/nbd.o 00:02:40.380 CC module/event/subsystems/scsi/scsi.o 00:02:40.380 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.380 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.380 LIB libspdk_event_nbd.a 00:02:40.380 LIB libspdk_event_scsi.a 00:02:40.638 LIB libspdk_event_nvmf.a 00:02:40.638 CC module/event/subsystems/iscsi/iscsi.o 00:02:40.638 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:40.638 LIB libspdk_event_vhost_scsi.a 00:02:40.638 LIB libspdk_event_iscsi.a 00:02:40.896 CXX app/trace/trace.o 00:02:40.896 TEST_HEADER include/spdk/rpc.h 00:02:40.896 CC app/trace_record/trace_record.o 00:02:40.896 TEST_HEADER include/spdk/vfio_user_spec.h 
00:02:40.896 TEST_HEADER include/spdk/accel_module.h 00:02:40.896 TEST_HEADER include/spdk/bit_pool.h 00:02:40.896 TEST_HEADER include/spdk/ioat.h 00:02:40.896 TEST_HEADER include/spdk/blobfs.h 00:02:40.896 TEST_HEADER include/spdk/pipe.h 00:02:40.896 TEST_HEADER include/spdk/accel.h 00:02:40.896 TEST_HEADER include/spdk/version.h 00:02:40.896 TEST_HEADER include/spdk/trace_parser.h 00:02:40.896 TEST_HEADER include/spdk/opal_spec.h 00:02:40.896 TEST_HEADER include/spdk/uuid.h 00:02:40.896 TEST_HEADER include/spdk/bdev.h 00:02:40.896 TEST_HEADER include/spdk/hexlify.h 00:02:40.896 TEST_HEADER include/spdk/likely.h 00:02:40.896 TEST_HEADER include/spdk/vhost.h 00:02:40.896 TEST_HEADER include/spdk/memory.h 00:02:40.896 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.896 TEST_HEADER include/spdk/dma.h 00:02:40.896 TEST_HEADER include/spdk/nbd.h 00:02:40.896 CC examples/accel/perf/accel_perf.o 00:02:40.896 TEST_HEADER include/spdk/env.h 00:02:40.896 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.896 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.896 TEST_HEADER include/spdk/init.h 00:02:40.896 TEST_HEADER include/spdk/fd_group.h 00:02:40.896 TEST_HEADER include/spdk/bdev_module.h 00:02:40.896 TEST_HEADER include/spdk/opal.h 00:02:40.896 TEST_HEADER include/spdk/event.h 00:02:40.896 TEST_HEADER include/spdk/base64.h 00:02:40.896 CC test/blobfs/mkfs/mkfs.o 00:02:40.896 TEST_HEADER include/spdk/nvmf.h 00:02:40.896 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.896 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.896 TEST_HEADER include/spdk/fd.h 00:02:40.896 TEST_HEADER include/spdk/barrier.h 00:02:40.896 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.896 TEST_HEADER include/spdk/zipf.h 00:02:40.896 TEST_HEADER include/spdk/scheduler.h 00:02:40.896 TEST_HEADER include/spdk/dif.h 00:02:40.896 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.896 TEST_HEADER include/spdk/blob.h 00:02:40.896 CC examples/blob/hello_world/hello_blob.o 00:02:40.896 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.896 CC test/accel/dif/dif.o 00:02:40.896 TEST_HEADER include/spdk/cpuset.h 00:02:40.896 TEST_HEADER include/spdk/thread.h 00:02:40.896 CC test/bdev/bdevio/bdevio.o 00:02:40.896 TEST_HEADER include/spdk/tree.h 00:02:40.896 TEST_HEADER include/spdk/xor.h 00:02:40.896 CC test/app/bdev_svc/bdev_svc.o 00:02:40.896 TEST_HEADER include/spdk/assert.h 00:02:40.896 TEST_HEADER include/spdk/file.h 00:02:40.896 TEST_HEADER include/spdk/endian.h 00:02:40.896 TEST_HEADER include/spdk/notify.h 00:02:40.896 TEST_HEADER include/spdk/util.h 00:02:40.896 TEST_HEADER include/spdk/log.h 00:02:40.896 TEST_HEADER include/spdk/sock.h 00:02:41.155 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:41.155 TEST_HEADER include/spdk/config.h 00:02:41.155 TEST_HEADER include/spdk/histogram_data.h 00:02:41.155 TEST_HEADER include/spdk/nvme_intel.h 00:02:41.155 TEST_HEADER include/spdk/idxd_spec.h 00:02:41.155 TEST_HEADER include/spdk/crc16.h 00:02:41.155 TEST_HEADER include/spdk/bdev_zone.h 00:02:41.155 TEST_HEADER include/spdk/stdinc.h 00:02:41.155 TEST_HEADER include/spdk/vmd.h 00:02:41.155 TEST_HEADER include/spdk/scsi.h 00:02:41.155 TEST_HEADER include/spdk/jsonrpc.h 00:02:41.155 TEST_HEADER include/spdk/blob_bdev.h 00:02:41.155 TEST_HEADER include/spdk/crc32.h 00:02:41.155 TEST_HEADER include/spdk/nvmf_transport.h 00:02:41.155 TEST_HEADER include/spdk/idxd.h 00:02:41.155 TEST_HEADER include/spdk/crc64.h 00:02:41.155 TEST_HEADER include/spdk/nvme.h 00:02:41.155 TEST_HEADER include/spdk/iscsi_spec.h 00:02:41.155 TEST_HEADER 
include/spdk/queue.h 00:02:41.155 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:41.155 TEST_HEADER include/spdk/lvol.h 00:02:41.155 TEST_HEADER include/spdk/ftl.h 00:02:41.155 TEST_HEADER include/spdk/trace.h 00:02:41.155 TEST_HEADER include/spdk/ioat_spec.h 00:02:41.155 TEST_HEADER include/spdk/conf.h 00:02:41.155 TEST_HEADER include/spdk/ublk.h 00:02:41.155 TEST_HEADER include/spdk/bit_array.h 00:02:41.155 TEST_HEADER include/spdk/pci_ids.h 00:02:41.155 TEST_HEADER include/spdk/nvme_spec.h 00:02:41.155 TEST_HEADER include/spdk/string.h 00:02:41.155 TEST_HEADER include/spdk/gpt_spec.h 00:02:41.155 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:41.155 TEST_HEADER include/spdk/json.h 00:02:41.155 TEST_HEADER include/spdk/reduce.h 00:02:41.155 TEST_HEADER include/spdk/mmio.h 00:02:41.155 CXX test/cpp_headers/rpc.o 00:02:41.155 LINK spdk_trace_record 00:02:41.155 LINK mkfs 00:02:41.155 LINK bdev_svc 00:02:41.155 LINK hello_blob 00:02:41.155 LINK hello_bdev 00:02:41.155 LINK spdk_trace 00:02:41.413 CXX test/cpp_headers/vfio_user_spec.o 00:02:41.413 LINK accel_perf 00:02:41.413 LINK dif 00:02:41.413 LINK bdevio 00:02:41.413 CXX test/cpp_headers/accel_module.o 00:02:41.413 CXX test/cpp_headers/bit_pool.o 00:02:41.672 CC app/nvmf_tgt/nvmf_main.o 00:02:41.672 CXX test/cpp_headers/ioat.o 00:02:41.672 CC app/iscsi_tgt/iscsi_tgt.o 00:02:41.672 LINK nvmf_tgt 00:02:41.672 CXX test/cpp_headers/blobfs.o 00:02:41.930 LINK iscsi_tgt 00:02:41.930 CXX test/cpp_headers/pipe.o 00:02:41.930 CC examples/ioat/perf/perf.o 00:02:41.930 CXX test/cpp_headers/accel.o 00:02:42.188 LINK ioat_perf 00:02:42.188 CXX test/cpp_headers/version.o 00:02:42.188 CXX test/cpp_headers/trace_parser.o 00:02:42.446 CXX test/cpp_headers/opal_spec.o 00:02:42.446 CXX test/cpp_headers/uuid.o 00:02:42.705 CXX test/cpp_headers/bdev.o 00:02:42.705 CC examples/ioat/verify/verify.o 00:02:42.705 CXX test/cpp_headers/hexlify.o 00:02:42.705 LINK verify 00:02:42.963 CXX test/cpp_headers/likely.o 00:02:42.963 CXX test/cpp_headers/vhost.o 00:02:43.222 CXX test/cpp_headers/memory.o 00:02:43.222 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.222 CC test/dma/test_dma/test_dma.o 00:02:43.222 CC examples/nvme/hello_world/hello_world.o 00:02:43.480 CC examples/sock/hello_world/hello_sock.o 00:02:43.480 CXX test/cpp_headers/dma.o 00:02:43.480 LINK hello_world 00:02:43.480 CXX test/cpp_headers/nbd.o 00:02:43.480 CXX test/cpp_headers/env.o 00:02:43.480 LINK hello_sock 00:02:43.480 LINK test_dma 00:02:43.738 CC test/env/mem_callbacks/mem_callbacks.o 00:02:43.738 CXX test/cpp_headers/nvme_zns.o 00:02:43.738 CC examples/blob/cli/blobcli.o 00:02:43.738 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.738 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.738 CXX test/cpp_headers/env_dpdk.o 00:02:43.997 CXX test/cpp_headers/init.o 00:02:43.997 LINK blobcli 00:02:43.997 LINK nvme_fuzz 00:02:43.997 LINK mem_callbacks 00:02:43.997 CXX test/cpp_headers/fd_group.o 00:02:44.255 LINK bdevperf 00:02:44.255 CXX test/cpp_headers/bdev_module.o 00:02:44.255 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.255 CC examples/nvme/reconnect/reconnect.o 00:02:44.255 CC test/env/vtophys/vtophys.o 00:02:44.255 LINK lsvmd 00:02:44.513 CXX test/cpp_headers/opal.o 00:02:44.513 CC app/spdk_tgt/spdk_tgt.o 00:02:44.513 CC app/spdk_lspci/spdk_lspci.o 00:02:44.513 LINK vtophys 00:02:44.513 LINK reconnect 00:02:44.513 CXX test/cpp_headers/event.o 00:02:44.513 LINK spdk_lspci 00:02:44.771 LINK spdk_tgt 00:02:44.771 CXX test/cpp_headers/base64.o 00:02:44.771 CXX test/cpp_headers/nvmf.o 00:02:45.029 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.029 CXX test/cpp_headers/nvmf_spec.o 00:02:45.029 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:45.029 CC examples/vmd/led/led.o 00:02:45.029 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.288 LINK env_dpdk_post_init 00:02:45.288 LINK led 00:02:45.288 CXX test/cpp_headers/fd.o 00:02:45.288 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.288 CXX test/cpp_headers/barrier.o 00:02:45.550 CC app/spdk_nvme_perf/perf.o 00:02:45.550 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:45.550 LINK nvme_manage 00:02:45.814 CXX test/cpp_headers/zipf.o 00:02:45.814 LINK iscsi_fuzz 00:02:45.814 CXX test/cpp_headers/scheduler.o 00:02:45.814 LINK spdk_nvme_perf 00:02:45.814 CC test/event/event_perf/event_perf.o 00:02:46.073 CXX test/cpp_headers/dif.o 00:02:46.073 CC test/env/memory/memory_ut.o 00:02:46.073 LINK event_perf 00:02:46.073 CXX test/cpp_headers/scsi_spec.o 00:02:46.073 CC examples/nvmf/nvmf/nvmf.o 00:02:46.331 CXX test/cpp_headers/blob.o 00:02:46.331 CXX test/cpp_headers/cpuset.o 00:02:46.331 CC examples/util/zipf/zipf.o 00:02:46.331 LINK nvmf 00:02:46.589 LINK zipf 00:02:46.589 CXX test/cpp_headers/thread.o 00:02:46.589 CC examples/nvme/arbitration/arbitration.o 00:02:46.589 CC examples/thread/thread/thread_ex.o 00:02:46.589 LINK memory_ut 00:02:46.589 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:46.589 CXX test/cpp_headers/tree.o 00:02:46.589 CXX test/cpp_headers/xor.o 00:02:46.589 CC test/event/reactor/reactor.o 00:02:46.589 LINK thread 00:02:46.589 LINK arbitration 00:02:46.589 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:46.848 CC app/spdk_nvme_identify/identify.o 00:02:46.848 CC test/env/pci/pci_ut.o 00:02:46.848 CXX test/cpp_headers/assert.o 00:02:46.848 LINK reactor 00:02:46.848 CXX test/cpp_headers/file.o 00:02:46.848 CC examples/idxd/perf/perf.o 00:02:46.848 LINK vhost_fuzz 00:02:47.106 LINK pci_ut 00:02:47.106 CXX test/cpp_headers/endian.o 00:02:47.106 LINK idxd_perf 00:02:47.106 CXX test/cpp_headers/notify.o 00:02:47.106 LINK spdk_nvme_identify 00:02:47.365 CXX test/cpp_headers/util.o 00:02:47.365 CXX test/cpp_headers/log.o 00:02:47.365 CC test/event/reactor_perf/reactor_perf.o 00:02:47.365 CXX test/cpp_headers/sock.o 00:02:47.365 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.623 LINK reactor_perf 00:02:47.623 CC examples/nvme/hotplug/hotplug.o 00:02:47.623 CC test/lvol/esnap/esnap.o 00:02:47.623 CXX test/cpp_headers/config.o 00:02:47.623 CC test/app/histogram_perf/histogram_perf.o 00:02:47.623 CXX test/cpp_headers/histogram_data.o 00:02:47.623 CXX test/cpp_headers/nvme_intel.o 00:02:47.623 LINK histogram_perf 00:02:47.623 LINK hotplug 00:02:47.623 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.881 CXX test/cpp_headers/idxd_spec.o 00:02:47.881 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.881 LINK interrupt_tgt 00:02:47.881 CXX test/cpp_headers/crc16.o 00:02:47.881 LINK cmb_copy 00:02:48.140 CC app/spdk_nvme_discover/discovery_aer.o 00:02:48.140 CXX test/cpp_headers/bdev_zone.o 00:02:48.140 CC test/event/app_repeat/app_repeat.o 00:02:48.140 LINK spdk_nvme_discover 00:02:48.140 CXX test/cpp_headers/stdinc.o 00:02:48.399 CC test/app/jsoncat/jsoncat.o 00:02:48.399 LINK app_repeat 00:02:48.399 CXX test/cpp_headers/vmd.o 00:02:48.399 LINK jsoncat 00:02:48.399 CXX test/cpp_headers/scsi.o 00:02:48.657 CC examples/nvme/abort/abort.o 00:02:48.657 CXX test/cpp_headers/jsonrpc.o 00:02:48.916 CXX test/cpp_headers/blob_bdev.o 00:02:48.916 LINK abort 00:02:48.916 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.916 CXX 
test/cpp_headers/crc32.o 00:02:48.916 CC test/app/stub/stub.o 00:02:48.916 CXX test/cpp_headers/nvmf_transport.o 00:02:48.916 LINK pmr_persistence 00:02:48.916 CC test/event/scheduler/scheduler.o 00:02:48.916 CXX test/cpp_headers/idxd.o 00:02:48.916 CXX test/cpp_headers/crc64.o 00:02:49.173 CC app/spdk_top/spdk_top.o 00:02:49.173 LINK stub 00:02:49.173 CXX test/cpp_headers/nvme.o 00:02:49.173 CXX test/cpp_headers/iscsi_spec.o 00:02:49.173 CXX test/cpp_headers/queue.o 00:02:49.173 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.173 LINK scheduler 00:02:49.173 CXX test/cpp_headers/lvol.o 00:02:49.173 CXX test/cpp_headers/ftl.o 00:02:49.430 CXX test/cpp_headers/trace.o 00:02:49.430 CXX test/cpp_headers/ioat_spec.o 00:02:49.430 CC app/vhost/vhost.o 00:02:49.430 CXX test/cpp_headers/conf.o 00:02:49.430 CXX test/cpp_headers/ublk.o 00:02:49.430 CXX test/cpp_headers/bit_array.o 00:02:49.430 LINK spdk_top 00:02:49.430 CXX test/cpp_headers/pci_ids.o 00:02:49.688 LINK vhost 00:02:49.688 CXX test/cpp_headers/nvme_spec.o 00:02:49.688 CXX test/cpp_headers/string.o 00:02:49.688 CXX test/cpp_headers/gpt_spec.o 00:02:49.688 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.688 CXX test/cpp_headers/json.o 00:02:49.688 CXX test/cpp_headers/reduce.o 00:02:49.688 CC app/spdk_dd/spdk_dd.o 00:02:49.688 CXX test/cpp_headers/mmio.o 00:02:49.946 CC test/rpc_client/rpc_client_test.o 00:02:49.946 CC app/fio/nvme/fio_plugin.o 00:02:49.946 CC test/nvme/aer/aer.o 00:02:49.946 LINK esnap 00:02:49.946 CC test/thread/poller_perf/poller_perf.o 00:02:50.204 LINK spdk_dd 00:02:50.204 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:50.204 LINK rpc_client_test 00:02:50.204 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:50.204 LINK poller_perf 00:02:50.204 LINK aer 00:02:50.204 LINK histogram_ut 00:02:50.471 LINK spdk_nvme 00:02:50.471 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:50.471 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:50.732 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:50.732 CC test/thread/lock/spdk_lock.o 00:02:50.989 LINK blob_bdev_ut 00:02:50.989 CC test/nvme/reset/reset.o 00:02:51.246 CC test/nvme/sgl/sgl.o 00:02:51.246 LINK reset 00:02:51.246 CC app/fio/bdev/fio_plugin.o 00:02:51.246 LINK accel_ut 00:02:51.504 LINK sgl 00:02:51.504 LINK spdk_lock 00:02:51.762 LINK spdk_bdev 00:02:51.762 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:51.762 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:52.020 LINK tree_ut 00:02:52.020 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:52.278 CC test/nvme/e2edp/nvme_dp.o 00:02:52.278 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:52.278 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:52.278 CC test/nvme/overhead/overhead.o 00:02:52.278 LINK nvme_dp 00:02:52.535 LINK blobfs_async_ut 00:02:52.535 LINK scsi_nvme_ut 00:02:52.535 LINK overhead 00:02:52.535 CC test/nvme/err_injection/err_injection.o 00:02:52.793 LINK blobfs_sync_ut 00:02:52.793 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:52.793 LINK err_injection 00:02:52.793 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:53.051 LINK blobfs_bdev_ut 00:02:53.051 LINK gpt_ut 00:02:53.051 CC test/nvme/startup/startup.o 00:02:53.309 LINK startup 00:02:53.309 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:53.309 CC test/nvme/reserve/reserve.o 00:02:53.309 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:53.309 LINK bdev_ut 00:02:53.309 LINK reserve 00:02:53.309 CC test/nvme/simple_copy/simple_copy.o 00:02:53.567 LINK dma_ut 00:02:53.567 CC 
test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:53.567 LINK simple_copy 00:02:53.567 CC test/nvme/connect_stress/connect_stress.o 00:02:53.825 CC test/unit/lib/event/app.c/app_ut.o 00:02:53.825 LINK connect_stress 00:02:53.825 LINK part_ut 00:02:53.825 LINK vbdev_lvol_ut 00:02:54.083 CC test/nvme/boot_partition/boot_partition.o 00:02:54.083 LINK app_ut 00:02:54.083 LINK boot_partition 00:02:54.083 LINK blob_ut 00:02:54.083 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:54.341 CC test/nvme/compliance/nvme_compliance.o 00:02:54.341 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:54.341 CC test/nvme/fused_ordering/fused_ordering.o 00:02:54.341 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:54.341 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:54.341 LINK fused_ordering 00:02:54.341 LINK bdev_zone_ut 00:02:54.599 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:54.599 LINK doorbell_aers 00:02:54.599 LINK nvme_compliance 00:02:54.599 CC test/nvme/fdp/fdp.o 00:02:54.599 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:54.858 LINK ioat_ut 00:02:54.858 LINK fdp 00:02:54.858 LINK reactor_ut 00:02:54.858 CC test/nvme/cuse/cuse.o 00:02:55.119 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:55.119 LINK vbdev_zone_block_ut 00:02:55.119 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:55.119 LINK bdev_raid_ut 00:02:55.378 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:55.378 LINK bdev_ut 00:02:55.378 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:55.378 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:55.378 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:55.378 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:55.636 LINK init_grp_ut 00:02:55.636 LINK cuse 00:02:55.636 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:55.636 LINK jsonrpc_server_ut 00:02:55.636 LINK bdev_raid_sb_ut 00:02:55.636 LINK conn_ut 00:02:55.636 CC test/unit/lib/log/log.c/log_ut.o 00:02:55.636 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:55.893 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:55.893 LINK concat_ut 00:02:55.893 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:55.893 LINK log_ut 00:02:55.893 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:55.893 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:56.151 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:56.151 LINK param_ut 00:02:56.151 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:56.151 LINK notify_ut 00:02:56.151 LINK json_parse_ut 00:02:56.151 LINK raid1_ut 00:02:56.151 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:56.409 LINK portal_grp_ut 00:02:56.409 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:56.409 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:56.409 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:56.667 LINK tgt_node_ut 00:02:56.667 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:56.667 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:56.667 LINK lvol_ut 00:02:56.924 LINK json_util_ut 00:02:56.924 LINK iscsi_ut 00:02:56.924 LINK dev_ut 00:02:56.924 LINK nvme_ut 00:02:56.924 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:57.187 CC test/unit/lib/sock/sock.c/sock_ut.o 00:02:57.187 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:02:57.187 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:57.187 LINK lun_ut 00:02:57.187 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:57.187 LINK scsi_ut 00:02:57.445 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:02:57.445 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 
00:02:57.702 LINK json_write_ut 00:02:57.702 LINK bdev_nvme_ut 00:02:57.702 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:02:57.702 LINK scsi_pr_ut 00:02:57.961 LINK sock_ut 00:02:57.961 LINK nvme_ctrlr_cmd_ut 00:02:57.961 LINK ctrlr_ut 00:02:57.961 LINK scsi_bdev_ut 00:02:57.961 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:57.961 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:57.961 LINK nvme_ctrlr_ut 00:02:58.219 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:58.219 LINK base64_ut 00:02:58.219 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:02:58.219 LINK thread_ut 00:02:58.219 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:58.219 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:02:58.219 LINK pci_event_ut 00:02:58.219 LINK tcp_ut 00:02:58.219 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:58.477 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:58.477 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:58.477 LINK subsystem_ut 00:02:58.477 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:58.735 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:02:58.735 LINK bit_array_ut 00:02:58.735 LINK posix_ut 00:02:58.735 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:58.735 LINK ctrlr_bdev_ut 00:02:58.735 LINK iobuf_ut 00:02:58.735 LINK subsystem_ut 00:02:58.735 LINK rpc_ut 00:02:58.993 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:58.993 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:58.993 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:02:58.993 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:02:58.993 LINK cpuset_ut 00:02:58.993 LINK idxd_user_ut 00:02:58.993 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:58.993 LINK ctrlr_discovery_ut 00:02:59.250 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:02:59.250 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:59.250 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:02:59.250 LINK nvmf_ut 00:02:59.250 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:59.250 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:59.250 LINK ftl_l2p_ut 00:02:59.250 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:59.509 LINK crc16_ut 00:02:59.509 LINK common_ut 00:02:59.509 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:59.509 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:02:59.509 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:59.509 CC test/unit/lib/util/dif.c/dif_ut.o 00:02:59.767 LINK nvme_ns_ut 00:02:59.767 LINK idxd_ut 00:02:59.767 LINK crc32_ieee_ut 00:02:59.767 LINK crc32c_ut 00:02:59.767 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:59.767 LINK crc64_ut 00:02:59.767 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:00.024 CC test/unit/lib/util/math.c/math_ut.o 00:03:00.024 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:00.024 CC test/unit/lib/util/string.c/string_ut.o 00:03:00.024 LINK iov_ut 00:03:00.024 LINK math_ut 00:03:00.024 LINK vhost_ut 00:03:00.281 LINK ftl_band_ut 00:03:00.281 LINK string_ut 00:03:00.281 LINK pipe_ut 00:03:00.281 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:00.281 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:00.281 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:00.281 LINK dif_ut 00:03:00.539 LINK transport_ut 00:03:00.539 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:00.539 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:00.539 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:00.539 LINK xor_ut 00:03:00.539 LINK rdma_ut 00:03:00.539 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:00.539 LINK 
ftl_bitmap_ut 00:03:00.539 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:00.797 LINK ftl_io_ut 00:03:00.797 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:00.797 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:00.797 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:00.797 LINK ftl_mempool_ut 00:03:00.797 LINK nvme_ns_cmd_ut 00:03:01.056 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:01.056 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:01.056 LINK nvme_poll_group_ut 00:03:01.056 LINK ftl_mngt_ut 00:03:01.056 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:01.056 LINK nvme_ns_ocssd_cmd_ut 00:03:01.314 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:01.314 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:01.314 LINK nvme_quirks_ut 00:03:01.315 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:01.315 LINK nvme_pcie_ut 00:03:01.573 LINK ftl_layout_upgrade_ut 00:03:01.573 LINK ftl_sb_ut 00:03:01.573 LINK nvme_qpair_ut 00:03:01.573 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:01.573 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:01.573 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:01.831 LINK nvme_io_msg_ut 00:03:01.831 LINK nvme_transport_ut 00:03:02.089 LINK nvme_fabric_ut 00:03:02.089 LINK nvme_opal_ut 00:03:02.089 LINK nvme_pcie_common_ut 00:03:02.347 LINK nvme_tcp_ut 00:03:02.605 LINK nvme_cuse_ut 00:03:03.171 LINK nvme_rdma_ut 00:03:03.430 ************************************ 00:03:03.430 END TEST unittest_build 00:03:03.430 ************************************ 00:03:03.430 00:03:03.430 real 0m59.770s 00:03:03.430 user 4m55.609s 00:03:03.430 sys 1m32.491s 00:03:03.430 04:40:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:03.430 04:40:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.430 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:03.430 04:40:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:03.430 04:40:17 -- nvmf/common.sh@7 -- # uname -s 00:03:03.430 04:40:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:03.430 04:40:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:03.430 04:40:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:03.430 04:40:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:03.430 04:40:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:03.430 04:40:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:03.430 04:40:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:03.430 04:40:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:03.430 04:40:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:03.431 04:40:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:03.431 04:40:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a56cc220-eae5-45ac-82e9-235cfe24dc68 00:03:03.431 04:40:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=a56cc220-eae5-45ac-82e9-235cfe24dc68 00:03:03.431 04:40:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:03.431 04:40:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:03.431 04:40:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:03.431 04:40:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:03.431 04:40:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:03.431 04:40:17 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.431 04:40:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.431 04:40:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:03.431 04:40:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:03.431 04:40:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:03.431 04:40:17 -- paths/export.sh@5 -- # export PATH 00:03:03.431 04:40:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:03.431 04:40:17 -- nvmf/common.sh@46 -- # : 0 00:03:03.431 04:40:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:03.431 04:40:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:03.431 04:40:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:03.431 04:40:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:03.431 04:40:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:03.431 04:40:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:03.431 04:40:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:03.431 04:40:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:03.431 04:40:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:03.431 04:40:17 -- spdk/autotest.sh@32 -- # uname -s 00:03:03.431 04:40:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:03.431 04:40:17 -- spdk/autotest.sh@33 -- # old_core_pattern=core 00:03:03.431 04:40:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:03.431 04:40:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:03.431 04:40:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:03.431 04:40:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:03.431 modprobe: FATAL: Module nbd not found. 
00:03:03.431 04:40:17 -- spdk/autotest.sh@44 -- # true 00:03:03.431 04:40:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:03.431 04:40:17 -- spdk/autotest.sh@46 -- # udevadm=/sbin/udevadm 00:03:03.431 04:40:17 -- spdk/autotest.sh@48 -- # udevadm_pid=30569 00:03:03.431 04:40:17 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:03.431 04:40:17 -- spdk/autotest.sh@47 -- # /sbin/udevadm monitor --property 00:03:03.431 04:40:17 -- spdk/autotest.sh@54 -- # echo 30571 00:03:03.431 04:40:17 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:03.431 04:40:17 -- spdk/autotest.sh@56 -- # echo 30572 00:03:03.431 04:40:17 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:03.431 04:40:17 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:03.431 04:40:17 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.431 04:40:17 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:03.431 04:40:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:03.431 04:40:17 -- common/autotest_common.sh@10 -- # set +x 00:03:03.431 04:40:17 -- spdk/autotest.sh@70 -- # create_test_list 00:03:03.431 04:40:17 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:03.431 04:40:17 -- common/autotest_common.sh@10 -- # set +x 00:03:03.690 04:40:17 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:03.690 04:40:17 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:03.690 04:40:17 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:03.690 04:40:17 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:03.690 04:40:17 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:03.690 04:40:17 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:03.690 04:40:17 -- common/autotest_common.sh@1440 -- # uname 00:03:03.690 04:40:17 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:03.690 04:40:17 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:03.690 04:40:17 -- common/autotest_common.sh@1460 -- # uname 00:03:03.690 04:40:17 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:03.690 04:40:17 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:03.690 04:40:17 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:03.690 04:40:17 -- spdk/autotest.sh@83 -- # hash lcov 00:03:03.690 04:40:17 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:03.690 04:40:17 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:03.690 --rc lcov_branch_coverage=1 00:03:03.690 --rc lcov_function_coverage=1 00:03:03.690 --rc genhtml_branch_coverage=1 00:03:03.690 --rc genhtml_function_coverage=1 00:03:03.690 --rc genhtml_legend=1 00:03:03.690 --rc geninfo_all_blocks=1 00:03:03.690 ' 00:03:03.690 04:40:17 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:03.690 --rc lcov_branch_coverage=1 00:03:03.690 --rc lcov_function_coverage=1 00:03:03.690 --rc genhtml_branch_coverage=1 00:03:03.690 --rc genhtml_function_coverage=1 00:03:03.690 --rc genhtml_legend=1 00:03:03.690 --rc geninfo_all_blocks=1 00:03:03.690 ' 00:03:03.690 04:40:17 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:03.690 --rc lcov_branch_coverage=1 00:03:03.690 --rc lcov_function_coverage=1 00:03:03.691 --rc genhtml_branch_coverage=1 00:03:03.691 --rc 
genhtml_function_coverage=1 00:03:03.691 --rc genhtml_legend=1 00:03:03.691 --rc geninfo_all_blocks=1 00:03:03.691 --no-external' 00:03:03.691 04:40:17 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:03.691 --rc lcov_branch_coverage=1 00:03:03.691 --rc lcov_function_coverage=1 00:03:03.691 --rc genhtml_branch_coverage=1 00:03:03.691 --rc genhtml_function_coverage=1 00:03:03.691 --rc genhtml_legend=1 00:03:03.691 --rc geninfo_all_blocks=1 00:03:03.691 --no-external' 00:03:03.691 04:40:17 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:03.691 lcov: LCOV version 1.15 00:03:03.691 04:40:17 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:11.841 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:11.841 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:11.841 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:11.841 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:11.841 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:11.841 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:29.925 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:29.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:29.925 
[geninfo repeats the same two-line warning pair — '<header>.gcno:no functions found' followed by 'GCOV did not produce any data' — for dozens more headers under /home/vagrant/spdk_repo/spdk/test/cpp_headers/, ioat.gcno through conf.gcno] 00:03:29.927 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:29.927 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:29.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:08.669 04:41:21 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:08.669 04:41:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:08.669 04:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:08.669 04:41:21 -- spdk/autotest.sh@102 -- # rm -f 00:04:08.669 04:41:21 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:08.669 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:08.669 04:41:21 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:08.669 04:41:21 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:08.669 04:41:21 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:08.669 04:41:21 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:08.669 04:41:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:08.669 04:41:21 -- 
common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:08.669 04:41:21 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:08.669 04:41:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.669 04:41:21 -- common/autotest_common.sh@1649 -- # return 1 00:04:08.669 04:41:21 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:08.669 04:41:21 -- spdk/autotest.sh@121 -- # grep -v p 00:04:08.669 04:41:21 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:08.669 04:41:21 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:08.669 04:41:21 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:08.669 04:41:21 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:08.669 04:41:21 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:08.669 04:41:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.669 No valid GPT data, bailing 00:04:08.669 04:41:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.669 04:41:21 -- scripts/common.sh@393 -- # pt= 00:04:08.669 04:41:21 -- scripts/common.sh@394 -- # return 1 00:04:08.669 04:41:21 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.669 1+0 records in 00:04:08.669 1+0 records out 00:04:08.669 1048576 bytes (1.0 MB) copied, 0.00456658 s, 230 MB/s 00:04:08.669 04:41:21 -- spdk/autotest.sh@129 -- # sync 00:04:08.669 04:41:21 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.669 04:41:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.669 04:41:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.669 04:41:22 -- spdk/autotest.sh@135 -- # uname -s 00:04:08.669 04:41:22 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:08.669 04:41:22 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:08.669 04:41:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.669 04:41:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.669 04:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:08.669 ************************************ 00:04:08.669 START TEST setup.sh 00:04:08.669 ************************************ 00:04:08.669 04:41:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:08.669 * Looking for test storage... 00:04:08.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.957 04:41:22 -- setup/test-setup.sh@10 -- # uname -s 00:04:08.957 04:41:22 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:08.957 04:41:22 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:08.957 04:41:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.957 04:41:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.957 04:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:08.957 ************************************ 00:04:08.957 START TEST acl 00:04:08.957 ************************************ 00:04:08.957 04:41:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:08.957 * Looking for test storage... 
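Earlier in this line the wipe decision plays out: spdk-gpt.py and blkid both find no partition table on /dev/nvme0n1, block_in_use returns 1, and autotest.sh zeroes the first MiB of the disk before syncing. A condensed sketch of that check, mirroring the commands visible in the trace (the control flow is reconstructed from the log, not copied verbatim from scripts/common.sh):

    dev=/dev/nvme0n1
    # blkid prints the partition-table type (e.g. gpt/dos) or nothing;
    # it exits nonzero when no signature is found, hence the '|| true'.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z "$pt" ]]; then
        # No GPT/MBR signature ("No valid GPT data, bailing" above):
        # scrub the label area so the tests start from a clean disk.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
    sync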
00:04:08.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.957 04:41:23 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:08.957 04:41:23 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:08.957 04:41:23 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:08.957 04:41:23 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:08.957 04:41:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:08.957 04:41:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:08.957 04:41:23 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:08.957 04:41:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.957 04:41:23 -- common/autotest_common.sh@1649 -- # return 1 00:04:08.957 04:41:23 -- setup/acl.sh@12 -- # devs=() 00:04:08.957 04:41:23 -- setup/acl.sh@12 -- # declare -a devs 00:04:08.957 04:41:23 -- setup/acl.sh@13 -- # drivers=() 00:04:08.957 04:41:23 -- setup/acl.sh@13 -- # declare -A drivers 00:04:08.957 04:41:23 -- setup/acl.sh@51 -- # setup reset 00:04:08.957 04:41:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.957 04:41:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.240 04:41:23 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:09.240 04:41:23 -- setup/acl.sh@16 -- # local dev driver 00:04:09.240 04:41:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.240 04:41:23 -- setup/acl.sh@15 -- # setup output status 00:04:09.240 04:41:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.240 04:41:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:09.499 Hugepages 00:04:09.499 node hugesize free / total 00:04:09.499 04:41:23 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:09.499 04:41:23 -- setup/acl.sh@19 -- # continue 00:04:09.499 04:41:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.499 00:04:09.499 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.499 04:41:23 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:09.499 04:41:23 -- setup/acl.sh@19 -- # continue 00:04:09.499 04:41:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.499 04:41:23 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:09.499 04:41:23 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:09.499 04:41:23 -- setup/acl.sh@20 -- # continue 00:04:09.499 04:41:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.499 04:41:23 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:09.499 04:41:23 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.499 04:41:23 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:09.499 04:41:23 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.499 04:41:23 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.499 04:41:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.499 04:41:23 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:09.499 04:41:23 -- setup/acl.sh@54 -- # run_test denied denied 00:04:09.499 04:41:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.499 04:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.499 04:41:23 -- common/autotest_common.sh@10 -- # set +x 00:04:09.499 ************************************ 00:04:09.499 START TEST denied 00:04:09.499 ************************************ 00:04:09.499 04:41:23 -- common/autotest_common.sh@1104 -- # denied 00:04:09.499 04:41:23 -- setup/acl.sh@38 -- # 
PCI_BLOCKED=' 0000:00:06.0' 00:04:09.499 04:41:23 -- setup/acl.sh@38 -- # setup output config 00:04:09.499 04:41:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.499 04:41:23 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:09.499 04:41:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.759 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:09.759 04:41:23 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:09.759 04:41:23 -- setup/acl.sh@28 -- # local dev driver 00:04:09.759 04:41:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.759 04:41:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:09.759 04:41:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:09.759 04:41:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:09.759 04:41:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.759 04:41:23 -- setup/acl.sh@41 -- # setup reset 00:04:09.759 04:41:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.759 04:41:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.327 00:04:10.327 real 0m0.716s 00:04:10.327 user 0m0.319s 00:04:10.327 sys 0m0.451s 00:04:10.327 ************************************ 00:04:10.327 END TEST denied 00:04:10.327 ************************************ 00:04:10.327 04:41:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.327 04:41:24 -- common/autotest_common.sh@10 -- # set +x 00:04:10.327 04:41:24 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:10.327 04:41:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.327 04:41:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.327 04:41:24 -- common/autotest_common.sh@10 -- # set +x 00:04:10.327 ************************************ 00:04:10.327 START TEST allowed 00:04:10.327 ************************************ 00:04:10.327 04:41:24 -- common/autotest_common.sh@1104 -- # allowed 00:04:10.327 04:41:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:10.327 04:41:24 -- setup/acl.sh@45 -- # setup output config 00:04:10.327 04:41:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.327 04:41:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.327 04:41:24 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:10.896 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.896 04:41:24 -- setup/acl.sh@47 -- # verify 00:04:10.896 04:41:24 -- setup/acl.sh@28 -- # local dev driver 00:04:10.896 04:41:24 -- setup/acl.sh@48 -- # setup reset 00:04:10.896 04:41:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.896 04:41:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.156 ************************************ 00:04:11.156 END TEST allowed 00:04:11.156 ************************************ 00:04:11.156 00:04:11.156 real 0m0.802s 00:04:11.156 user 0m0.276s 00:04:11.156 sys 0m0.513s 00:04:11.156 04:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.156 04:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.156 00:04:11.156 real 0m2.273s 00:04:11.156 user 0m0.908s 00:04:11.156 sys 0m1.451s 00:04:11.156 04:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.156 04:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.156 ************************************ 00:04:11.156 END TEST acl 00:04:11.156 
************************************ 00:04:11.156 04:41:25 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:11.156 04:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.156 04:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.156 04:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.156 ************************************ 00:04:11.156 START TEST hugepages 00:04:11.156 ************************************ 00:04:11.156 04:41:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:11.156 * Looking for test storage... 00:04:11.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:11.156 04:41:25 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:11.156 04:41:25 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:11.156 04:41:25 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:11.156 04:41:25 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:11.156 04:41:25 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:11.156 04:41:25 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:11.156 04:41:25 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:11.156 04:41:25 -- setup/common.sh@18 -- # local node= 00:04:11.156 04:41:25 -- setup/common.sh@19 -- # local var val 00:04:11.156 04:41:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.156 04:41:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.156 04:41:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.156 04:41:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.156 04:41:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.156 04:41:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.156 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.156 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.156 04:41:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 4803436 kB' 'MemAvailable: 7435404 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2181624 kB' 'Inactive: 732632 kB' 'Active(anon): 88648 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 88528 kB' 'Mapped: 25244 kB' 'Shmem: 16896 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'KernelStack: 3712 kB' 'PageTables: 7984 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4053416 kB' 'Committed_AS: 337448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38768 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB' 00:04:11.156 04:41:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.156 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.156 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.156 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.156 04:41:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.156 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.156 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.156 04:41:25 -- setup/common.sh@31 -- # read -r var val _ [the same IFS=': ' / read / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue trace repeats for every other /proc/meminfo field] 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.157 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.157 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.157 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.157 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.157 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.157 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.157 04:41:25 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.157 04:41:25 -- setup/common.sh@33 -- # echo 2048 00:04:11.157 04:41:25 -- setup/common.sh@33 -- # return 0 00:04:11.157 04:41:25 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:11.157 04:41:25 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:11.157 04:41:25 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:11.157 04:41:25 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:11.157 04:41:25 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:11.157 04:41:25 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:11.157 04:41:25 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:11.157 04:41:25 -- setup/hugepages.sh@207 -- # get_nodes 00:04:11.157 04:41:25 -- setup/hugepages.sh@27 -- # local node 00:04:11.157 04:41:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.157 04:41:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:11.157 04:41:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.157 04:41:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.157 04:41:25 -- setup/hugepages.sh@208 -- # clear_hp 00:04:11.157 04:41:25 -- setup/hugepages.sh@37 -- # local node hp 00:04:11.157 04:41:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:11.157 04:41:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:11.157 04:41:25 -- setup/hugepages.sh@41 -- # echo 0 00:04:11.157 04:41:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:11.157 04:41:25 -- setup/hugepages.sh@41 -- # echo 0 00:04:11.157 04:41:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:11.157 04:41:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:11.157 04:41:25 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:11.157 04:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.157 04:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.157 04:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.417 ************************************ 00:04:11.417 START TEST default_setup 00:04:11.417 ************************************ 00:04:11.417 04:41:25 -- common/autotest_common.sh@1104 -- # default_setup 00:04:11.417 04:41:25 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:11.417 04:41:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.417 04:41:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:11.417 04:41:25 -- setup/hugepages.sh@51 -- # shift 00:04:11.417 04:41:25 -- 
setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:11.417 04:41:25 -- setup/hugepages.sh@52 -- # local node_ids 00:04:11.417 04:41:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.417 04:41:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.417 04:41:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:11.417 04:41:25 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:11.417 04:41:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.417 04:41:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.417 04:41:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.417 04:41:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.417 04:41:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.417 04:41:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:11.417 04:41:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.417 04:41:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:11.417 04:41:25 -- setup/hugepages.sh@73 -- # return 0 00:04:11.417 04:41:25 -- setup/hugepages.sh@137 -- # setup output 00:04:11.417 04:41:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.417 04:41:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:11.680 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.680 04:41:25 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:11.680 04:41:25 -- setup/hugepages.sh@89 -- # local node 00:04:11.680 04:41:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.680 04:41:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.680 04:41:25 -- setup/hugepages.sh@92 -- # local surp 00:04:11.680 04:41:25 -- setup/hugepages.sh@93 -- # local resv 00:04:11.680 04:41:25 -- setup/hugepages.sh@94 -- # local anon 00:04:11.680 04:41:25 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:11.680 04:41:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.680 04:41:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.680 04:41:25 -- setup/common.sh@18 -- # local node= 00:04:11.680 04:41:25 -- setup/common.sh@19 -- # local var val 00:04:11.680 04:41:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.680 04:41:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.680 04:41:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.680 04:41:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.680 04:41:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.680 04:41:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.680 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.680 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.680 04:41:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6900416 kB' 'MemAvailable: 9532384 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2187612 kB' 'Inactive: 732632 kB' 'Active(anon): 94636 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93868 kB' 'Mapped: 25244 kB' 'Shmem: 16896 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'KernelStack: 3712 kB' 'PageTables: 7596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 
'Committed_AS: 343372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB' 00:04:11.680 04:41:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.680 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.680 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.680 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.680 04:41:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.680 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.680 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.680 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.680 04:41:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # continue [the same IFS=': ' / read / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue trace repeats for every other /proc/meminfo field] 00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.681 04:41:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.681 04:41:25 -- setup/common.sh@33 -- # echo 8192 00:04:11.681 04:41:25 -- setup/common.sh@33 -- # 
00:04:11.681 04:41:25 -- setup/common.sh@33 -- # return 0
00:04:11.681 04:41:25 -- setup/hugepages.sh@97 -- # anon=8192
00:04:11.681 04:41:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.681 04:41:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.681 04:41:25 -- setup/common.sh@18 -- # local node=
00:04:11.681 04:41:25 -- setup/common.sh@19 -- # local var val
00:04:11.681 04:41:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.681 04:41:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.681 04:41:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.681 04:41:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.681 04:41:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.681 04:41:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.681 04:41:25 -- setup/common.sh@31 -- # IFS=': '
00:04:11.681 04:41:25 -- setup/common.sh@31 -- # read -r var val _
00:04:11.681 04:41:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6900676 kB' 'MemAvailable: 9532644 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2187872 kB' 'Inactive: 732632 kB' 'Active(anon): 94896 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93868 kB' 'Mapped: 25244 kB' 'Shmem: 16896 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'KernelStack: 3712 kB' 'PageTables: 7596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-key scan of the snapshot above for HugePages_Surp]
00:04:11.682 04:41:25 -- setup/common.sh@33 -- # echo 0
00:04:11.682 04:41:25 -- setup/common.sh@33 -- # return 0
00:04:11.682 04:41:25 -- setup/hugepages.sh@99 -- # surp=0
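The churn elided above is easier to see as code: get_meminfo snapshots the relevant meminfo file once (the printf), then walks it entry by entry until the requested key matches, echoing the value. A minimal reconstruction, inferred from this trace alone rather than taken from SPDK's actual setup/common.sh:

    #!/usr/bin/env bash
    # Simplified get_meminfo, reconstructed from the xtrace: look one field
    # up in /proc/meminfo, or in a node's own meminfo file when a node id
    # is given. Hypothetical code; only the behavior is taken from the log.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _

        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Node files prefix each line with "Node <n> "; strip it, then split
        # every line on ": " and stop at the first key that matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # 8192 on this runner
    surp=$(get_meminfo HugePages_Surp)  # 0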
00:04:11.682 04:41:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: same get_meminfo preamble as above, with get=HugePages_Rsvd]
00:04:11.683 04:41:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6900368 kB' 'MemAvailable: 9532336 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2187612 kB' 'Inactive: 732632 kB' 'Active(anon): 94636 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93480 kB' 'Mapped: 25244 kB' 'Shmem: 16896 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'KernelStack: 3712 kB' 'PageTables: 7596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-key scan for HugePages_Rsvd]
00:04:11.684 04:41:25 -- setup/common.sh@33 -- # echo 0
00:04:11.684 04:41:25 -- setup/common.sh@33 -- # return 0
00:04:11.684 nr_hugepages=1024
00:04:11.684 resv_hugepages=0
00:04:11.684 surplus_hugepages=0
00:04:11.684 anon_hugepages=8192
00:04:11.684 04:41:25 -- setup/hugepages.sh@100 -- # resv=0
00:04:11.684 04:41:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.684 04:41:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.684 04:41:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.684 04:41:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:11.684 04:41:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.684 04:41:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
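Those two arithmetic gates are the core of the verification: the pool the kernel reports must equal what the test configured plus any surplus pages (allocated beyond the static pool) and reserved pages (committed to a mapping but not yet faulted in); in this run both extras are zero. Restated with the get_meminfo sketch from above (the 1024 literal mirrors this run's pool size):

    # Hypothetical restatement of the hugepages.sh@107/@109 checks.
    nr_hugepages=1024                     # pool size the test configured
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run

    # Kernel-reported pool must be the requested pages plus the extras;
    # with no surplus and no reservations it must equal the request exactly.
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages )) || exit 1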
00:04:11.684 04:41:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: same get_meminfo preamble, with get=HugePages_Total]
00:04:11.684 04:41:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6900300 kB' 'MemAvailable: 9532268 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2187612 kB' 'Inactive: 732632 kB' 'Active(anon): 94636 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93868 kB' 'Mapped: 25244 kB' 'Shmem: 16896 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'KernelStack: 3712 kB' 'PageTables: 7596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-key scan for HugePages_Total]
00:04:11.946 04:41:25 -- setup/common.sh@33 -- # echo 1024
00:04:11.946 04:41:25 -- setup/common.sh@33 -- # return 0
00:04:11.946 04:41:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.946 04:41:25 -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.946 04:41:25 -- setup/hugepages.sh@27 -- # local node
00:04:11.946 04:41:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.946 04:41:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:11.946 04:41:25 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:11.946 04:41:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
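get_nodes needs no parsing at all: each NUMA node is a directory under /sys/devices/system/node, and the extglob pattern node+([0-9]) in the trace is how bash globs them. A standalone equivalent (the sysfs hugepage counter it reads is an assumption; the trace only shows the count 1024 being stored):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the same way the traced get_nodes loop does.
    shopt -s extglob nullglob

    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Key: numeric node id. Value: that node's 2 MiB hugepage pool size.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    echo "found ${#nodes_sys[@]} node(s): ${!nodes_sys[*]}"  # this VM: one node, id 0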
00:04:11.946 04:41:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.946 04:41:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.946 04:41:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.946 04:41:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.946 04:41:25 -- setup/common.sh@18 -- # local node=0
00:04:11.946 04:41:25 -- setup/common.sh@19 -- # local var val
00:04:11.946 04:41:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.946 04:41:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.946 04:41:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.946 04:41:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.946 04:41:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.946 04:41:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.946 04:41:25 -- setup/common.sh@31 -- # IFS=': '
00:04:11.946 04:41:25 -- setup/common.sh@31 -- # read -r var val _
00:04:11.946 04:41:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6900560 kB' 'MemUsed: 5400580 kB' 'Active: 2187612 kB' 'Inactive: 732632 kB' 'Active(anon): 94636 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25244 kB' 'AnonPages: 93868 kB' 'Shmem: 16896 kB' 'KernelStack: 3712 kB' 'PageTables: 7596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
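One line of that preamble does real work here: with mem_f now pointing at node0's meminfo, every entry arrives prefixed with "Node 0 ", which would break the key comparison. The mem=("${mem[@]#Node +([0-9]) }") expansion strips that prefix from the whole array in one go; a quick demonstration, assuming a node0 path as on this VM:

    # Per-node meminfo lines carry a "Node <n> " prefix; /proc/meminfo's do not.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo

    echo "${mem[0]}"                      # e.g. "Node 0 MemTotal: 12301140 kB"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the prefix from every element
    echo "${mem[0]}"                      # now  "MemTotal: 12301140 kB"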
[xtrace elided: per-key scan of the node0 snapshot for HugePages_Surp]
00:04:11.947 04:41:25 -- setup/common.sh@33 -- # echo 0
00:04:11.947 04:41:25 -- setup/common.sh@33 -- # return 0
00:04:11.947 04:41:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.947 04:41:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.947 04:41:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.947 04:41:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.947 node0=1024 expecting 1024
00:04:11.947 ************************************
00:04:11.947 END TEST default_setup
00:04:11.947 ************************************
00:04:11.947 04:41:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.947 04:41:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.947 real	0m0.529s
00:04:11.947 user	0m0.200s
00:04:11.947 sys	0m0.318s
00:04:11.947 04:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:11.947 04:41:25 -- common/autotest_common.sh@10 -- # set +x
00:04:11.947 04:41:25 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:11.947 04:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:11.947 04:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:11.947 04:41:25 -- common/autotest_common.sh@10 -- # set +x
00:04:11.947 ************************************
00:04:11.947 START TEST per_node_1G_alloc
00:04:11.947 ************************************
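The next test's name is literal arithmetic: with a 2048 kB hugepage size, 512 pages on a single node is 512 x 2 MiB = 1 GiB. The trace drives the allocation through NRHUGE=512 HUGENODE=0 and scripts/setup.sh; a reduced stand-in using only the kernel's per-node sysfs knob (setup.sh itself does considerably more than this):

    #!/usr/bin/env bash
    # Reserve 512 x 2 MiB hugepages (1 GiB) on NUMA node 0 only; this is the
    # per-node knob that NRHUGE/HUGENODE ultimately target. Needs root.
    NRHUGE=${NRHUGE:-512}
    HUGENODE=${HUGENODE:-0}

    sysfs=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB
    echo "$NRHUGE" > "$sysfs/nr_hugepages"

    # The kernel may grant fewer pages than asked if the node's memory is
    # fragmented, which is why verify_nr_hugepages re-reads the counters.
    echo "node$HUGENODE now holds $(< "$sysfs/nr_hugepages") hugepages"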
00:04:11.947 04:41:25 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:11.947 04:41:25 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:11.947 04:41:25 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:11.947 04:41:25 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:11.947 04:41:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:11.947 04:41:25 -- setup/hugepages.sh@51 -- # shift
00:04:11.947 04:41:25 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:11.947 04:41:25 -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.947 04:41:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.947 04:41:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:11.947 04:41:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:11.947 04:41:25 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:11.947 04:41:25 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.947 04:41:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:11.947 04:41:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:11.947 04:41:25 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.947 04:41:25 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.947 04:41:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:11.947 04:41:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.947 04:41:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:11.947 04:41:25 -- setup/hugepages.sh@73 -- # return 0
00:04:11.947 04:41:25 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:11.947 04:41:25 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:11.947 04:41:25 -- setup/hugepages.sh@146 -- # setup output
00:04:11.947 04:41:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.947 04:41:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:12.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:12.210 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.210 04:41:26 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:12.210 04:41:26 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:12.210 04:41:26 -- setup/hugepages.sh@89 -- # local node
00:04:12.210 04:41:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.210 04:41:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.210 04:41:26 -- setup/hugepages.sh@92 -- # local surp
00:04:12.210 04:41:26 -- setup/hugepages.sh@93 -- # local resv
00:04:12.210 04:41:26 -- setup/hugepages.sh@94 -- # local anon
00:04:12.210 04:41:26 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
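The oddly quoted [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] above is xtrace showing an already-expanded transparent-hugepage check: the left side is the kernel's THP mode string, with the active mode in brackets, and the guard only asserts that the mode is not "never" (otherwise the AnonHugePages accounting that follows would be moot). The same check in its unexpanded form (helper name is hypothetical):

    # THP mode string looks like "[always] madvise never"; brackets mark the
    # active mode. The traced guard passes as long as it isn't "[never]".
    thp_active() {
        [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]
    }

    thp_active && echo "THP enabled: AnonHugePages can be nonzero"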
00:04:12.210 04:41:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.210 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7948336 kB' 'MemAvailable: 10580304 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188264 kB' 'Inactive: 732632 kB' 'Active(anon): 95288 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2092976 kB' 'Inactive(file): 715944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 92896 kB' 'Mapped: 25244 kB' 'Shmem: 16896 kB' 'Slab: 171460 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 49984 kB' 'KernelStack: 3712 kB' 'PageTables: 8276 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626280 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:12.210 (xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo field in turn and continue past every key that is not AnonHugePages)
00:04:12.211 04:41:26 -- setup/common.sh@33 -- # echo 8192
00:04:12.211 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.211 04:41:26 -- setup/hugepages.sh@97 -- # anon=8192
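The scan condensed above is a plain field walk over /proc/meminfo. A minimal standalone equivalent of that lookup (hypothetical function body; the suite's version uses mapfile plus the per-key pattern test traced above) would be:

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern: split each line on ': '
# and print the value of the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # the "kB" unit, when present, lands in _
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo AnonHugePages   # prints 8192 in the run traced above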
00:04:12.211 04:41:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.211 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' (second /proc/meminfo snapshot; identical to the one above except Active: 2188068 kB and Active(anon): 95092 kB)
00:04:12.211 (xtrace condensed: per-field scan continues past every key that is not HugePages_Surp)
00:04:12.212 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:12.212 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.212 04:41:26 -- setup/hugepages.sh@99 -- # surp=0
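When reading this kind of trace, the same counters can be spot-checked in one step rather than per field; an illustrative one-liner (not part of the suite):

# Illustrative spot-check of the counters the trace extracts one by one:
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
# In this run:
#   HugePages_Total:     512
#   HugePages_Free:      512
#   HugePages_Rsvd:        0
#   HugePages_Surp:        0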
00:04:12.212 04:41:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.212 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' (third /proc/meminfo snapshot; Active: 2187748 kB, Active(anon): 94772 kB, Dirty: 20 kB, Slab: 171468 kB, hugepage counters unchanged)
00:04:12.212 (xtrace condensed: per-field scan continues past every key that is not HugePages_Rsvd)
00:04:12.213 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:12.213 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.213 nr_hugepages=512
00:04:12.213 resv_hugepages=0
00:04:12.213 surplus_hugepages=0
00:04:12.213 anon_hugepages=8192
00:04:12.213 04:41:26 -- setup/hugepages.sh@100 -- # resv=0
00:04:12.213 04:41:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:12.213 04:41:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.213 04:41:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.213 04:41:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
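The checks that follow assert the pool-accounting identity the suite relies on: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages. A compact restatement under the same assumptions (hypothetical function name):

# Sketch of the accounting check performed by verify_nr_hugepages:
#   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
verify_pool() {
    local want=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == want + surp + resv ))   # 512 == 512 + 0 + 0 in this run
}
verify_pool 512 && echo "pool consistent"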
00:04:12.213 04:41:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:12.213 04:41:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:12.213 04:41:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.213 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' (fourth /proc/meminfo snapshot; Active: 2186880 kB, Active(anon): 93900 kB, Slab: 171536 kB, KernelStack: 3808 kB, hugepage counters unchanged)
00:04:12.213 (xtrace condensed: per-field scan continues past every key that is not HugePages_Total)
00:04:12.214 04:41:26 -- setup/common.sh@33 -- # echo 512
00:04:12.214 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.214 04:41:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:12.214 04:41:26 -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.214 04:41:26 -- setup/hugepages.sh@27 -- # local node
00:04:12.214 04:41:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.215 04:41:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:12.215 04:41:26 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:12.215 04:41:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
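Per-node verification switches the data source from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node N " prefix (the trace strips it with the "${mem[@]#Node +([0-9]) }" expansion). A minimal per-node reader under the same assumptions (hypothetical function name):

#!/usr/bin/env bash
# Sketch: per-node counterpart of the reader above. Lines in
# /sys/devices/system/node/nodeN/meminfo look like
#   "Node 0 HugePages_Total:   512"
# so the "Node N" prefix is consumed before the key is compared.
get_node_meminfo() {
    local node=$1 get=$2 _node n var val _
    while read -r _node n var val _; do   # _node="Node", n=node id
        var=${var%:}                      # drop the trailing colon
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

get_node_meminfo 0 HugePages_Surp   # prints 0 in the run traced above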
00:04:12.215 04:41:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.215 04:41:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.215 04:41:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.215 04:41:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.215 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7949248 kB' 'MemUsed: 4351892 kB' 'Active: 2186796 kB' 'Inactive: 732624 kB' 'Active(anon): 93812 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25236 kB' 'AnonPages: 93544 kB' 'Shmem: 16892 kB' 'KernelStack: 3760 kB' 'PageTables: 7844 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:12.215 (xtrace condensed: per-field scan of the node0 snapshot continues past every key that is not HugePages_Surp)
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.215 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.215 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.215 04:41:26 -- setup/common.sh@33 -- # echo 0 00:04:12.215 04:41:26 -- setup/common.sh@33 -- # return 0 00:04:12.215 node0=512 expecting 512 00:04:12.215 ************************************ 00:04:12.215 END TEST per_node_1G_alloc 00:04:12.215 ************************************ 00:04:12.215 04:41:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.215 04:41:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.215 04:41:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.215 04:41:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.215 04:41:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 
512' 00:04:12.215 04:41:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.215 00:04:12.215 real 0m0.299s 00:04:12.215 user 0m0.148s 00:04:12.215 sys 0m0.184s 00:04:12.215 04:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.216 04:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.216 04:41:26 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:12.216 04:41:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.216 04:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.216 04:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.216 ************************************ 00:04:12.216 START TEST even_2G_alloc 00:04:12.216 ************************************ 00:04:12.216 04:41:26 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:12.216 04:41:26 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:12.216 04:41:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:12.216 04:41:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.216 04:41:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.216 04:41:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:12.216 04:41:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.216 04:41:26 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:12.216 04:41:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.216 04:41:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:12.216 04:41:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.216 04:41:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.216 04:41:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.216 04:41:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.216 04:41:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.216 04:41:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.216 04:41:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:12.216 04:41:26 -- setup/hugepages.sh@83 -- # : 0 00:04:12.216 04:41:26 -- setup/hugepages.sh@84 -- # : 0 00:04:12.216 04:41:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.216 04:41:26 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:12.216 04:41:26 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:12.216 04:41:26 -- setup/hugepages.sh@153 -- # setup output 00:04:12.216 04:41:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.216 04:41:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:12.478 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.478 04:41:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:12.478 04:41:26 -- setup/hugepages.sh@89 -- # local node 00:04:12.478 04:41:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.478 04:41:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.478 04:41:26 -- setup/hugepages.sh@92 -- # local surp 00:04:12.478 04:41:26 -- setup/hugepages.sh@93 -- # local resv 00:04:12.478 04:41:26 -- setup/hugepages.sh@94 -- # local anon 00:04:12.478 04:41:26 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:12.478 04:41:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.478 04:41:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.478 04:41:26 -- setup/common.sh@18 -- # local node= 00:04:12.478 04:41:26 -- setup/common.sh@19 -- # local 
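The get_test_nr_hugepages trace above reduces to one division: the requested allocation is converted into a page count using the system's default hugepage size (Hugepagesize: 2048 kB in the snapshots below), and that count is then spread across the NUMA nodes. A minimal sketch of the same arithmetic, assuming the size argument is in kB and using illustrative names rather than the exact SPDK helpers:

    # Sketch only: mirrors the traced arithmetic, not the verbatim hugepages.sh source.
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 (kB) on this VM
    size=2097152                                                          # requested kB (2 GiB), per the trace
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))                      # 2097152 / 2048 = 1024
    fi
    echo "NRHUGE=$nr_hugepages"   # the trace exports NRHUGE=1024 before scripts/setup.sh runs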
00:04:12.478 04:41:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:12.478 04:41:26 -- setup/hugepages.sh@89 -- # local node
00:04:12.478 04:41:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.478 04:41:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.478 04:41:26 -- setup/hugepages.sh@92 -- # local surp
00:04:12.478 04:41:26 -- setup/hugepages.sh@93 -- # local resv
00:04:12.478 04:41:26 -- setup/hugepages.sh@94 -- # local anon
00:04:12.478 04:41:26 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:12.478 04:41:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.478 04:41:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.478 04:41:26 -- setup/common.sh@18 -- # local node=
00:04:12.478 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:12.478 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.478 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.478 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.478 04:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.478 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.478 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.478 04:41:26 -- setup/common.sh@31 -- # IFS=': '
00:04:12.478 04:41:26 -- setup/common.sh@31 -- # read -r var val _
00:04:12.479 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898452 kB' 'MemAvailable: 9530424 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188028 kB' 'Inactive: 732624 kB' 'Active(anon): 95044 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94208 kB' 'Mapped: 25244 kB' 'Shmem: 16892 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'KernelStack: 3744 kB' 'PageTables: 8052 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
[... @31/@32 xtrace repeats for each /proc/meminfo key from MemTotal onward; none matches until AnonHugePages ...]
00:04:12.479 04:41:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.479 04:41:26 -- setup/common.sh@33 -- # echo 8192
00:04:12.479 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.479 04:41:26 -- setup/hugepages.sh@97 -- # anon=8192
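Every get_meminfo call in this trace follows the same pattern: slurp the stats file with mapfile, strip the "Node N " prefix that the per-node sysfs files carry, then split each line on ': ' and return the value of the first key that matches. A self-contained sketch of that loop (assumed equivalent to the traced setup/common.sh logic, not its verbatim source):

    # Sketch of the traced lookup; names mirror the xtrace but the structure is illustrative.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, read that node's sysfs meminfo instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix sysfs adds
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the per-key scan seen in the xtrace
            echo "$val"                         # e.g. 8192 for AnonHugePages here
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages for the system-wide value, or as get_meminfo HugePages_Surp 0 for node 0, matching the invocations in the trace.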
00:04:12.479 04:41:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.479 04:41:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.479 04:41:26 -- setup/common.sh@18 -- # local node=
00:04:12.479 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:12.479 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.479 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.479 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.480 04:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.480 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.480 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.480 04:41:26 -- setup/common.sh@31 -- # IFS=': '
00:04:12.480 04:41:26 -- setup/common.sh@31 -- # read -r var val _
00:04:12.480 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot identical to the first one except 'MemFree: 6898396 kB' 'MemAvailable: 9530368 kB' 'Active: 2187768 kB' 'Active(anon): 94784 kB']
[... @31/@32 xtrace repeats for each key; none matches until HugePages_Surp ...]
00:04:12.481 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.481 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:12.481 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.481 04:41:26 -- setup/hugepages.sh@99 -- # surp=0
00:04:12.481 04:41:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.481 04:41:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.481 04:41:26 -- setup/common.sh@18 -- # local node=
00:04:12.481 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:12.481 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.481 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.481 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.481 04:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.481 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.481 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.481 04:41:26 -- setup/common.sh@31 -- # IFS=': '
00:04:12.481 04:41:26 -- setup/common.sh@31 -- # read -r var val _
00:04:12.481 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot identical to the first one except 'MemFree: 6898656 kB' 'MemAvailable: 9530628 kB']
[... @31/@32 xtrace repeats for each key; none matches until HugePages_Rsvd ...]
00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.482 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:12.482 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.482 nr_hugepages=1024
00:04:12.482 resv_hugepages=0
00:04:12.482 surplus_hugepages=0
00:04:12.482 anon_hugepages=8192
00:04:12.482 04:41:26 -- setup/hugepages.sh@100 -- # resv=0
00:04:12.482 04:41:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:12.482 04:41:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.482 04:41:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.482 04:41:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:12.482 04:41:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.482 04:41:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
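The checks at hugepages.sh@107 and @109 encode the invariant this test verifies: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. Plugging in the values just read, as a worked example (illustrative variable names):

    # Worked check with the values gathered above.
    nr_hugepages=1024   # requested via NRHUGE
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    anon=8192           # AnonHugePages (kB); reported, but not part of the invariant
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage totals consistent"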
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.482 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.482 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.483 04:41:26 -- setup/common.sh@32 -- # continue 00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.483 04:41:26 -- setup/common.sh@31 -- # IFS=': '
00:04:12.483 04:41:26 -- setup/common.sh@31 -- # read -r var val _
00:04:12.483 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.483 04:41:26 -- setup/common.sh@33 -- # echo 1024
00:04:12.483 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.483 04:41:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.483 04:41:26 -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.483 04:41:26 -- setup/hugepages.sh@27 -- # local node
00:04:12.483 04:41:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.483 04:41:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:12.483 04:41:26 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:12.483 04:41:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.483 04:41:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.483 04:41:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.483 04:41:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.483 04:41:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.483 04:41:26 -- setup/common.sh@18 -- # local node=0
00:04:12.483 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:12.483 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.483 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.483 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.483 04:41:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.483 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.483 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.483 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898508 kB' 'MemUsed: 5402632 kB' 'Active: 2188028 kB' 'Inactive: 732624 kB' 'Active(anon): 95044 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25244 kB' 'AnonPages: 93820 kB' 'Shmem: 16892 kB' 'KernelStack: 3744 kB' 'PageTables: 7664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:12.483-00:04:12.484 [xtrace condensed: setup/common.sh@31-32 looped over the node0 meminfo fields MemTotal through HugePages_Free with `continue` until HugePages_Surp matched]
00:04:12.484 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.484 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:12.484 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.484 04:41:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.484 04:41:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.484 04:41:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.484 04:41:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.484 node0=1024 expecting 1024
00:04:12.484 ************************************
00:04:12.484 END TEST even_2G_alloc
00:04:12.484 ************************************
00:04:12.484 04:41:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:12.484 04:41:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:12.484
00:04:12.484 real	0m0.314s
00:04:12.484 user	0m0.160s
00:04:12.484 sys	0m0.189s
00:04:12.484 04:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:12.484 04:41:26 -- common/autotest_common.sh@10 -- # set +x
00:04:12.484 04:41:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:12.484 04:41:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:12.484 04:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:12.484 04:41:26 -- common/autotest_common.sh@10 -- # set +x
00:04:12.743 ************************************
00:04:12.743 START TEST odd_alloc
00:04:12.743 ************************************
00:04:12.743 04:41:26 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:12.743 04:41:26 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:12.743 04:41:26 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:12.743 04:41:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:12.743 04:41:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:12.743 04:41:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:12.743 04:41:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:12.743 04:41:26 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:12.743 04:41:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:12.743 04:41:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:12.743 04:41:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:12.743 04:41:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:12.743 04:41:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:12.743 04:41:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:12.743 04:41:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:12.743 04:41:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:12.743 04:41:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:12.743 04:41:26 -- setup/hugepages.sh@83 -- # : 0
00:04:12.743 04:41:26 -- setup/hugepages.sh@84 -- # : 0
00:04:12.743 04:41:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:12.743 04:41:26 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:12.743 04:41:26 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:12.743 04:41:26 -- setup/hugepages.sh@160 -- # setup output
00:04:12.743 04:41:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.743 04:41:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:12.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:12.743 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.743 04:41:26 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:12.743 04:41:26 -- setup/hugepages.sh@89 -- # local node
00:04:12.743 04:41:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.743 04:41:26 -- setup/hugepages.sh@91 -- # local sorted_s
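For reference, the sizing odd_alloc just traced: HUGEMEM=2049 MB is 2098176 kB, and at the 2048 kB default page size that is 1024.5 pages, which the harness lands on 1025, an odd count, apparently by rounding up. A sketch of that arithmetic plus the single-node distribution shown at hugepages.sh@82; this is a reconstruction under assumptions, not the exact SPDK code:

    # Hypothetical ceiling division; it reproduces size=2098176 -> nr_hugepages=1025.
    default_hugepages=2048                 # kB, the Hugepagesize reported in the dumps
    size=$((2049 * 1024))                  # HUGEMEM is given in MB -> 2098176 kB
    nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))
    echo "nr_hugepages=$nr_hugepages"      # 1025

    # With no user-supplied node list and one NUMA node, all pages go to node 0,
    # matching "nodes_test[_no_nodes - 1]=1025" in the trace.
    nodes_test=()
    _no_nodes=1
    nodes_test[_no_nodes - 1]=$nr_hugepages
    echo "node0=${nodes_test[0]}"          # node0=1025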
00:04:12.743 04:41:26 -- setup/hugepages.sh@92 -- # local surp
00:04:12.743 04:41:26 -- setup/hugepages.sh@93 -- # local resv
00:04:12.743 04:41:26 -- setup/hugepages.sh@94 -- # local anon
00:04:12.743 04:41:26 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:12.743 04:41:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.743 04:41:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.743 04:41:26 -- setup/common.sh@18 -- # local node=
00:04:12.743 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:12.743 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.743 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.743 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.743 04:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.743 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.743 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.744 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6896628 kB' 'MemAvailable: 9528600 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188876 kB' 'Inactive: 732624 kB' 'Active(anon): 95892 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93820 kB' 'Mapped: 25632 kB' 'Shmem: 16892 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'KernelStack: 3744 kB' 'PageTables: 7664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100968 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:12.744 [xtrace condensed: setup/common.sh@31-32 looped over the /proc/meminfo fields MemTotal through HardwareCorrupted with `continue` until AnonHugePages matched]
00:04:12.744 04:41:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.744 04:41:26 -- setup/common.sh@33 -- # echo 8192
00:04:12.744 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:12.744 04:41:26 -- setup/hugepages.sh@97 -- # anon=8192
00:04:13.005 04:41:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.005 04:41:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.005 04:41:26 -- setup/common.sh@18 -- # local node=
00:04:13.005 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:13.005 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.005 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.005 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.005 04:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.005 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.005 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.005 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6897084 kB' 'MemAvailable: 9529056 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188616 kB' 'Inactive: 732624 kB' 'Active(anon): 95632 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94208 kB' 'Mapped: 25632 kB' 'Shmem: 16892 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'KernelStack: 3744 kB' 'PageTables: 7664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100968 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
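The `[[ [always] madvise never != *\[\n\e\v\e\r\]* ]]` test at hugepages.sh@96 above is a transparent-hugepage gate: the left-hand side is the content of the THP `enabled` sysfs file, the brackets mark the active mode, and AnonHugePages is only sampled when THP is not pinned to never. A sketch of that gate, reusing the get_meminfo sketch from earlier; the sysfs path is the standard kernel location:

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    # "[always] madvise never" means THP is active, so anonymous hugepages count.
    if [[ -e $thp && $(<"$thp") != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 8192 (kB) in this run
    fi
    echo "anon_hugepages=$anon"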
00:04:13.005-00:04:13.006 [xtrace condensed: setup/common.sh@31-32 looped over the /proc/meminfo fields MemTotal through HugePages_Rsvd with `continue` until HugePages_Surp matched]
00:04:13.006 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.006 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:13.006 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:13.006 04:41:26 -- setup/hugepages.sh@99 -- # surp=0
00:04:13.006 04:41:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.006 04:41:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.006 04:41:26 -- setup/common.sh@18 -- # local node=
00:04:13.006 04:41:26 -- setup/common.sh@19 -- # local var val
00:04:13.006 04:41:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.006 04:41:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.006 04:41:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.006 04:41:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.006 04:41:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.006 04:41:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.006 04:41:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6897084 kB' 'MemAvailable: 9529056 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188616 kB' 'Inactive: 732624 kB' 'Active(anon): 95632 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93820 kB' 'Mapped: 25632 kB' 'Shmem: 16892 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'KernelStack: 3744 kB' 'PageTables: 7664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100968 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
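Note the contrast with the node-scoped call earlier (get_meminfo HugePages_Surp 0): here node= is empty, so the `-e /sys/devices/system/node/node/meminfo` test at common.sh@23 fails and the helper stays on /proc/meminfo, whereas a node argument switches the source file and strips the `Node N ` prefix each line carries there. A simplified rendering of that selection, taken from the traced lines (extglob is needed for the +([0-9]) pattern):

    shopt -s extglob
    node=0                                  # an empty node would keep the host view
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]:0:3}"           # first few fields, for illustration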
00:04:13.006-00:04:13.007 [xtrace condensed: setup/common.sh@31-32 looped over the /proc/meminfo fields MemTotal through HugePages_Free with `continue` until HugePages_Rsvd matched]
00:04:13.007 04:41:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.007 04:41:26 -- setup/common.sh@33 -- # echo 0
00:04:13.007 04:41:26 -- setup/common.sh@33 -- # return 0
00:04:13.007 nr_hugepages=1025
00:04:13.007 resv_hugepages=0
00:04:13.007 surplus_hugepages=0
00:04:13.007 04:41:26 -- setup/hugepages.sh@100 -- # resv=0
00:04:13.007 04:41:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:13.007 04:41:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.007 04:41:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.007 anon_hugepages=8192
00:04:13.007 04:41:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:13.007 04:41:26 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.007 04:41:27 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:13.007 04:41:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.007 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.008 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.008 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.008 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.008 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.008 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.008 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.008 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.008 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.008 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6897604 kB' 'MemAvailable: 9529576 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188616 kB' 'Inactive: 732624 kB' 'Active(anon): 95632 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93820 kB' 'Mapped: 25632 kB' 'Shmem: 16892 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'KernelStack: 3744 kB' 'PageTables: 7664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100968 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
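With all the probes in, the verify step reduces to the arithmetic traced at hugepages.sh@107 and @109: HugePages_Total (1025) must equal nr_hugepages plus surplus plus reserved, that is 1025 + 0 + 0, and must also equal the requested odd count itself. As a worked check with this run's values:

    nr_hugepages=1025   # requested by odd_alloc
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total
    (( total == nr_hugepages + surp + resv )) || echo "accounting mismatch" >&2
    (( total == nr_hugepages ))               || echo "allocation short" >&2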
kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB' 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.008 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.008 04:41:27 -- setup/common.sh@31 -- # 
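The call just traced is get_meminfo from test/setup/common.sh: it snapshots the meminfo file and scans it key by key with IFS=': ' until the requested field matches, printing that field's value. A minimal sketch of that loop, reconstructed from the xtrace (an approximation of the behavior the trace shows, not the exact SPDK source; extglob is needed for the Node-prefix strip):

  shopt -s extglob
  get_meminfo() {                       # usage: get_meminfo <field> [node]
      local get=$1 node=${2:-}
      local var val mem_f mem
      mem_f=/proc/meminfo
      # when a node index is given, prefer the per-node sysfs copy
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines carry a "Node N " prefix; drop it
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"                   # e.g. 1025 for HugePages_Total here
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Every key the scan skips shows up in the raw trace as one [[ ... ]]/continue pair, which is why a single get_meminfo call expands to some forty near-identical xtrace lines; only the matching comparison and the echoed result are kept above.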
00:04:13.009 04:41:27 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.009 04:41:27 -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.009 04:41:27 -- setup/hugepages.sh@27 -- # local node
00:04:13.009 04:41:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.009 04:41:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:13.009 04:41:27 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:13.009 04:41:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:13.009 04:41:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.009 04:41:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.009 04:41:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.009 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.009 04:41:27 -- setup/common.sh@18 -- # local node=0
00:04:13.009 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.009 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.009 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.009 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:13.009 04:41:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:13.009 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.009 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.009 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.009 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.009 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6897512 kB' 'MemUsed: 5403628 kB' 'Active: 2188616 kB' 'Inactive: 732624 kB' 'Active(anon): 95632 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25632 kB' 'AnonPages: 93820 kB' 'Shmem: 16892 kB' 'KernelStack: 3744 kB' 'PageTables: 7664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171544 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50068 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:13.009 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.009 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.009 04:41:27 -- setup/common.sh@33 -- # return 0
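get_nodes, traced just before this per-node check, discovers the NUMA topology by globbing sysfs and records each node's hugepage count in nodes_sys. A rough sketch under the assumption that the count (1025 here) is read from the node's hugepages directory; the trace only shows the resulting assignment, so the exact source of the value is inferred:

  shopt -s extglob nullglob
  nodes_sys=()
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # ${node##*node} strips everything up to the last "node" -> the index
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))    # at least one node must exist, or the caller fails
  }

On this single-node VM the loop runs once, leaving nodes_sys[0]=1025 and no_nodes=1, exactly what the trace shows. Note that the per-node read above went through /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, so HugePages_Surp is the surplus on node 0 specifically.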
00:04:13.009 node0=1025 expecting 1025
00:04:13.009 04:41:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.009 04:41:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.009 04:41:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.009 04:41:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.009 04:41:27 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:13.009 04:41:27 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:13.009
00:04:13.009 real 0m0.315s
00:04:13.009 user 0m0.147s
00:04:13.009 sys 0m0.200s
00:04:13.009 04:41:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:13.009 ************************************
00:04:13.009 END TEST odd_alloc
00:04:13.010 ************************************
00:04:13.010 04:41:27 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:13.010 04:41:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:13.010 04:41:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:13.010 04:41:27 -- common/autotest_common.sh@10 -- # set +x
00:04:13.010 ************************************
00:04:13.010 START TEST custom_alloc
00:04:13.010 ************************************
00:04:13.010 04:41:27 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:13.010 04:41:27 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:13.010 04:41:27 -- setup/hugepages.sh@169 -- # local node
00:04:13.010 04:41:27 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:13.010 04:41:27 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:13.010 04:41:27 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:13.010 04:41:27 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:13.010 04:41:27 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:13.010 04:41:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:13.010 04:41:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:13.010 04:41:27 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:13.010 04:41:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.010 04:41:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:13.010 04:41:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:13.010 04:41:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.010 04:41:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.010 04:41:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:13.010 04:41:27 -- setup/hugepages.sh@83 -- # : 0
00:04:13.010 04:41:27 -- setup/hugepages.sh@84 -- # : 0
00:04:13.010 04:41:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:13.010 04:41:27 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:13.010 04:41:27 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:13.010 04:41:27 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:13.010 04:41:27 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:13.010 04:41:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.010 04:41:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:13.010 04:41:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:13.010 04:41:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.010 04:41:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.010 04:41:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:13.010 04:41:27 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:13.010 04:41:27 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:13.010 04:41:27 -- setup/hugepages.sh@78 -- # return 0
00:04:13.010 04:41:27 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
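custom_alloc's first step is plain arithmetic: the 1048576 kB (1 GiB) target divided by the 2048 kB Hugepagesize reported in the meminfo snapshots gives the 512 pages that end up assigned to node 0. A sketch of get_test_nr_hugepages as the trace suggests it behaves (the value of default_hugepages is inferred from the snapshots, and the structure is an approximation of the real function):

  get_test_nr_hugepages() {
      local size=$1                        # target in kB, e.g. 1048576
      local default_hugepages=2048         # Hugepagesize from /proc/meminfo, in kB
      (( size >= default_hugepages ))      # refuse targets smaller than one page
      nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
  }

The HUGENODE='nodes_hp[0]=512' assignment above then hands that per-node plan to scripts/setup.sh, which performs the actual allocation.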
00:04:13.010 04:41:27 -- setup/hugepages.sh@187 -- # setup output
00:04:13.010 04:41:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.010 04:41:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:13.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:13.273 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:13.273 04:41:27 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:13.273 04:41:27 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:13.273 04:41:27 -- setup/hugepages.sh@89 -- # local node
00:04:13.273 04:41:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:13.273 04:41:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:13.273 04:41:27 -- setup/hugepages.sh@92 -- # local surp
00:04:13.273 04:41:27 -- setup/hugepages.sh@93 -- # local resv
00:04:13.273 04:41:27 -- setup/hugepages.sh@94 -- # local anon
00:04:13.273 04:41:27 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:13.273 04:41:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.273 04:41:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:13.273 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.273 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.273 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.273 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.273 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.273 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.273 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.273 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.273 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.273 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.273 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7947032 kB' 'MemAvailable: 10579004 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188128 kB' 'Inactive: 732624 kB' 'Active(anon): 95144 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94696 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 7784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626280 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:13.273 04:41:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.273 04:41:27 -- setup/common.sh@33 -- # echo 8192
00:04:13.274 04:41:27 -- setup/common.sh@33 -- # return 0
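The @96 test compares the kernel's transparent-hugepage mode line, '[always] madvise never', against the pattern *[never]*: anonymous hugepages are only worth counting when THP is not pinned to never. A sketch of that gate (the sysfs path is the standard kernel location for this file, assumed rather than shown in the trace):

  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "[always] madvise never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)                   # 8192 kB in this run
  else
      anon=0                                              # THP disabled: nothing to count
  fi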
00:04:13.274 04:41:27 -- setup/hugepages.sh@97 -- # anon=8192
00:04:13.274 04:41:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.274 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.274 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.274 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.274 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.274 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.274 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.274 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.274 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.274 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.274 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.274 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.274 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7947292 kB' 'MemAvailable: 10579264 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188128 kB' 'Inactive: 732624 kB' 'Active(anon): 95144 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94696 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 7784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626280 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:13.275 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.275 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.275 04:41:27 -- setup/common.sh@33 -- # return 0
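The backslash-heavy right-hand sides throughout these scans ( \H\u\g\e\P\a\g\e\s\_\S\u\r\p and the like) are not corruption: when xtrace prints a [[ ... == ... ]] whose right operand was quoted, bash escapes every character to show the operand is matched literally rather than as a glob. A two-line reproduction of the effect (assuming the same quoted comparison the trace suggests common.sh uses):

  set -x
  var=HugePages_Surp
  [[ $var == "$var" ]] && echo match
  # xtrace renders the test as: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]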
00:04:13.275 04:41:27 -- setup/hugepages.sh@99 -- # surp=0
00:04:13.275 04:41:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.275 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.275 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.275 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.275 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.275 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.275 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.275 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.275 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.275 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.275 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.275 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.275 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7947260 kB' 'MemAvailable: 10579232 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188128 kB' 'Inactive: 732624 kB' 'Active(anon): 95144 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94696 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626280 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.276 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.276 04:41:27 -- setup/common.sh@33 -- # return 0
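With surp and resv in hand, verify_nr_hugepages checks the kernel's hugepage accounting identity that the (( ... )) lines below spell out: the total the kernel reports must equal the requested page count plus surplus plus reserved pages. Condensed to its core (nr_hugepages=512 is assumed already set by the allocation step above):

  surp=$(get_meminfo HugePages_Surp)      # 0
  resv=$(get_meminfo HugePages_Rsvd)      # 0
  total=$(get_meminfo HugePages_Total)    # 512
  (( total == nr_hugepages + surp + resv ))    # 512 == 512 + 0 + 0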
continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.276 04:41:27 -- setup/common.sh@33 -- # echo 0 00:04:13.276 04:41:27 -- setup/common.sh@33 -- # return 0 00:04:13.276 nr_hugepages=512 00:04:13.276 resv_hugepages=0 00:04:13.276 surplus_hugepages=0 
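The scan traced above is setup/common.sh's get_meminfo helper walking /proc/meminfo line by line until the requested field (here HugePages_Rsvd) matches, then echoing its value. A minimal sketch of that loop as reconstructed from the xtrace; the names follow the trace, but the body is an assumption, not the verbatim script:

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {                        # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # a node argument switches to the per-node sysfs copy of meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of sysfs lines
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                # kB figure, or a bare count for HugePages_*
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

For example, get_meminfo HugePages_Rsvd prints the 0 echoed above, while get_meminfo HugePages_Surp 0 would read node0's sysfs file instead.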
00:04:13.276 anon_hugepages=8192
00:04:13.276 04:41:27 -- setup/hugepages.sh@100 -- # resv=0
00:04:13.276 04:41:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:13.276 04:41:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.276 04:41:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.276 04:41:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:13.276 04:41:27 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:13.276 04:41:27 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:13.276 04:41:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.276 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.276 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.276 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.276 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.276 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.276 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.276 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.276 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.276 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.276 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7947716 kB' 'MemAvailable: 10579688 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188128 kB' 'Inactive: 732624 kB' 'Active(anon): 95144 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94696 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626280 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:13.276 04:41:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.276 04:41:27 -- setup/common.sh@32 -- # continue
00:04:13.276 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.276 04:41:27 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the same cycle repeats for each key from MemFree through CmaFree; none matches HugePages_Total]
00:04:13.278 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.278 04:41:27 -- setup/common.sh@33 -- # echo 512
00:04:13.278 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.278 04:41:27 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:13.278 04:41:27 -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.278 04:41:27 -- setup/hugepages.sh@27 -- # local node
00:04:13.278 04:41:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.278 04:41:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:13.278 04:41:27 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:13.278 04:41:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
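get_nodes, just traced, discovers the NUMA layout by globbing the per-node sysfs directories: with extglob enabled, node+([0-9]) expands to node0, node1, and so on, and ${node##*node} strips the path down to the numeric id. On this VM only node0 exists, which is why no_nodes comes out as 1. A sketch of that shape; the exact body (and how the 512 recorded per node is sourced) is an assumption based on the trace:

    shopt -s extglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # record the expected hugepage count per node; the trace shows 512 here
            nodes_sys[${node##*node}]=$nr_hugepages
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail the test if no NUMA node was found
    }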
00:04:13.278 04:41:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.278 04:41:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.278 04:41:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.278 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.278 04:41:27 -- setup/common.sh@18 -- # local node=0
00:04:13.278 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.278 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.278 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.278 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:13.278 04:41:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:13.278 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.278 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.278 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.278 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.278 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 7947716 kB' 'MemUsed: 4353424 kB' 'Active: 2188128 kB' 'Inactive: 732624 kB' 'Active(anon): 95144 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25296 kB' 'AnonPages: 94696 kB' 'Shmem: 16892 kB' 'KernelStack: 3744 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:13.278 04:41:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.278 04:41:27 -- setup/common.sh@32 -- # continue
00:04:13.278 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.278 04:41:27 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the same cycle repeats for each node0 key from MemFree through HugePages_Free; none matches HugePages_Surp]
00:04:13.278 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.278 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.278 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.278 node0=512 expecting 512
00:04:13.278 ************************************
00:04:13.278 END TEST custom_alloc
00:04:13.278 ************************************
00:04:13.278 04:41:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.278 04:41:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.278 04:41:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.278 04:41:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.278 04:41:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:13.278 04:41:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:13.278 real 0m0.296s
00:04:13.278 user 0m0.154s
00:04:13.278 sys 0m0.173s
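The node0 snapshot used for that last check also cross-checks by hand: unlike /proc/meminfo, the per-node sysfs file reports MemUsed directly, and it is exactly MemTotal minus MemFree:

    # arithmetic check on the node0 meminfo snapshot above (values from the log)
    mem_total=12301140   # kB
    mem_free=7947716     # kB
    echo $(( mem_total - mem_free ))   # prints 4353424, the MemUsed figure reported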
00:04:13.278 04:41:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:13.278 04:41:27 -- common/autotest_common.sh@10 -- # set +x
00:04:13.278 04:41:27 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:13.278 04:41:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:13.278 04:41:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:13.278 04:41:27 -- common/autotest_common.sh@10 -- # set +x
00:04:13.278 ************************************
00:04:13.279 START TEST no_shrink_alloc
00:04:13.279 ************************************
00:04:13.279 04:41:27 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:13.279 04:41:27 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:13.279 04:41:27 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:13.279 04:41:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:13.279 04:41:27 -- setup/hugepages.sh@51 -- # shift
00:04:13.279 04:41:27 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:13.279 04:41:27 -- setup/hugepages.sh@52 -- # local node_ids
00:04:13.279 04:41:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:13.279 04:41:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:13.279 04:41:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:13.279 04:41:27 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:13.279 04:41:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:13.279 04:41:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:13.279 04:41:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:13.279 04:41:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:13.279 04:41:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:13.279 04:41:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:13.279 04:41:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:13.279 04:41:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:13.279 04:41:27 -- setup/hugepages.sh@73 -- # return 0
00:04:13.279 04:41:27 -- setup/hugepages.sh@198 -- # setup output
00:04:13.279 04:41:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.279 04:41:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:13.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:13.540 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
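get_test_nr_hugepages, traced at the start of this test, turns the requested size into a page count. The snapshots report Hugepagesize: 2048 kB, so, assuming the 2097152 argument is also in kB (i.e. 2 GiB), the nr_hugepages=1024 assignment is just the quotient:

    size=2097152        # kB requested (2 GiB); the unit is an assumption
    hugepagesize=2048   # kB, Hugepagesize from the meminfo snapshots
    echo $(( size / hugepagesize ))   # prints 1024 -> nr_hugepages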
00:04:13.540 04:41:27 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:13.540 04:41:27 -- setup/hugepages.sh@89 -- # local node
00:04:13.540 04:41:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:13.540 04:41:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:13.541 04:41:27 -- setup/hugepages.sh@92 -- # local surp
00:04:13.541 04:41:27 -- setup/hugepages.sh@93 -- # local resv
00:04:13.541 04:41:27 -- setup/hugepages.sh@94 -- # local anon
00:04:13.541 04:41:27 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:13.541 04:41:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.541 04:41:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:13.541 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.541 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.541 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.541 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.541 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.541 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.541 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.541 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.541 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.541 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.541 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898184 kB' 'MemAvailable: 9530156 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188856 kB' 'Inactive: 732624 kB' 'Active(anon): 95872 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 91588 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 7784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:13.541 04:41:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.541 04:41:27 -- setup/common.sh@32 -- # continue
00:04:13.541 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.541 04:41:27 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the same cycle repeats for each key from MemFree through HardwareCorrupted; none matches AnonHugePages]
00:04:13.541 04:41:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.541 04:41:27 -- setup/common.sh@33 -- # echo 8192
00:04:13.541 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.542 04:41:27 -- setup/hugepages.sh@97 -- # anon=8192
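The guard at hugepages.sh@96 compares the transparent-hugepage mode string (the bracketed word marks the active mode, here [always]) against *\[\n\e\v\e\r\]*, and only then counts anonymous hugepages. A sketch of that pair of steps, assuming the standard sysfs location is the source of the mode string:

    # "[always] madvise never" -> THP is on, so AnonHugePages is meaningful
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 8192 kB in this run
    else
        anon=0
    fi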
00:04:13.542 04:41:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.542 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.542 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.542 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.542 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.542 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.542 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.542 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.542 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.542 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.542 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.542 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.542 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898176 kB' 'MemAvailable: 9530148 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2188856 kB' 'Inactive: 732624 kB' 'Active(anon): 95872 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 91976 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
00:04:13.542 04:41:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.542 04:41:27 -- setup/common.sh@32 -- # continue
00:04:13.542 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.542 04:41:27 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the same cycle repeats for each key from MemFree through HugePages_Rsvd; none matches HugePages_Surp]
00:04:13.543 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.543 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.543 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.543 04:41:27 -- setup/hugepages.sh@99 -- # surp=0
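With anon and surp collected, and HugePages_Rsvd about to be read (the snapshot scanned below already reports HugePages_Rsvd: 0), verify_nr_hugepages has everything for the same accounting identity that hugepages.sh@107 applied in the previous test, now against the 1024-page reservation. A sketch of that final check, not the verbatim script:

    nr_hugepages=1024   # HugePages_Total in the snapshots above
    surp=0              # HugePages_Surp, returned just above
    resv=0              # HugePages_Rsvd, per the snapshot
    (( 1024 == nr_hugepages + surp + resv )) && echo 'hugepage accounting holds'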
[repetitive xtrace elided: get_meminfo walks every field, hitting "continue" until HugePages_Rsvd matches]
00:04:13.544 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.544 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.544 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.544 nr_hugepages=1024
00:04:13.544 resv_hugepages=0
00:04:13.544 surplus_hugepages=0
00:04:13.544 anon_hugepages=8192
00:04:13.544 04:41:27 -- setup/hugepages.sh@100 -- # resv=0
00:04:13.544 04:41:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:13.544 04:41:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.544 04:41:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.544 04:41:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:13.544 04:41:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.544 04:41:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:13.544 04:41:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.544 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.544 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.544 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.544 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.544 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.544 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.544 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.544 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.544 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.544 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898088 kB' ... 'AnonPages: 91976 kB' ... 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' ... (other fields unchanged from the snapshot above)
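The "(( 1024 == nr_hugepages + surp + resv ))" check above is the invariant being verified: the kernel's HugePages_Total must equal the expected page count plus surplus and reserved pages. A hedged sketch of the same arithmetic, reading the counters directly (awk is used here for brevity; the script itself goes through get_meminfo):

```bash
#!/usr/bin/env bash
# Re-check hugepage accounting the way hugepages.sh does, in a few
# lines. nr_hugepages=1024 is the count this test run expects.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
nr_hugepages=1024

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi
```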
[repetitive xtrace elided: per-field "continue" until HugePages_Total matches]
00:04:13.545 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.545 04:41:27 -- setup/common.sh@33 -- # echo 1024
00:04:13.545 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.545 04:41:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.545 04:41:27 -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.545 04:41:27 -- setup/hugepages.sh@27 -- # local node
00:04:13.545 04:41:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.545 04:41:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:13.545 04:41:27 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:13.545 04:41:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:13.545 04:41:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.545 04:41:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.545 04:41:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.545 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.545 04:41:27 -- setup/common.sh@18 -- # local node=0
00:04:13.545 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.545 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.545 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.545 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:13.545 04:41:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:13.545 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.545 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.545 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898348 kB' 'MemUsed: 5402792 kB' 'Active: 2188856 kB' 'Inactive: 732624 kB' 'Active(anon): 95872 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25296 kB' 'AnonPages: 91976 kB' 'Shmem: 16892 kB' 'KernelStack: 3744 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
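Note the switch above: because get_meminfo was called as "get_meminfo HugePages_Surp 0", mem_f moved from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the extglob expansion mem=("${mem[@]#Node +([0-9]) }") strips. A sketch of that node-aware variant (the function name here is illustrative; the prefix-strip line mirrors the trace):

```bash
#!/usr/bin/env bash
# Node-aware meminfo lookup: fall back to /proc/meminfo when no node is
# given or the per-node file is missing.
shopt -s extglob

get_node_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    local -a mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_node_meminfo HugePages_Surp 0   # per-node surplus, 0 in this run
```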
[repetitive xtrace elided: per-field "continue" until HugePages_Surp matches in the node0 snapshot]
00:04:13.546 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.546 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.546 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.546 node0=1024 expecting 1024
00:04:13.546 04:41:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.546 04:41:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.546 04:41:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.546 04:41:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.546 04:41:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:13.546 04:41:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:13.546 04:41:27 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:13.546 04:41:27 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:13.546 04:41:27 -- setup/hugepages.sh@202 -- # setup output
00:04:13.546 04:41:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.546 04:41:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:13.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:13.808 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:13.808 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:13.808 04:41:27 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:13.808 04:41:27 -- setup/hugepages.sh@89 -- # local node
00:04:13.808 04:41:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:13.808 04:41:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:13.808 04:41:27 -- setup/hugepages.sh@92 -- # local surp
00:04:13.808 04:41:27 -- setup/hugepages.sh@93 -- # local resv
00:04:13.808 04:41:27 -- setup/hugepages.sh@94 -- # local anon
00:04:13.808 04:41:27 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:13.808 04:41:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.808 04:41:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:13.808 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.808 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.808 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.808 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.808 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.808 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.808 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.808 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.808 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6898936 kB' ... 'Active: 2187748 kB' 'Active(anon): 94764 kB' ... 'AnonPages: 92268 kB' ... 'Committed_AS: 343508 kB' ... (other fields unchanged from the earlier system-wide snapshot)
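The INFO line above explains why rerunning setup.sh with NRHUGE=512 changes nothing: 1024 pages already exist on node0, so the smaller request is already satisfied. A sketch of such an idempotence guard, assuming the default 2048 kB hugepage size and the standard sysfs paths (an illustration of the observed behavior, not necessarily how scripts/setup.sh implements it internally):

```bash
#!/usr/bin/env bash
# Only touch nr_hugepages when fewer pages are allocated than requested;
# writing the sysfs file (needs root) triggers (de)allocation.
NRHUGE=${NRHUGE:-512}
node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB

allocated=$(cat "$node_sysfs/nr_hugepages")
if (( allocated >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
else
    echo "$NRHUGE" > "$node_sysfs/nr_hugepages"
fi
```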
[repetitive xtrace elided: per-field "continue" until AnonHugePages matches]
00:04:13.809 04:41:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.809 04:41:27 -- setup/common.sh@33 -- # echo 8192
00:04:13.809 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.809 04:41:27 -- setup/hugepages.sh@97 -- # anon=8192
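verify_nr_hugepages only trusts AnonHugePages after confirming transparent hugepages are not disabled; that is what the "[[ [always] madvise never != *\[\n\e\v\e\r\]* ]]" line above tests, the bracketed word being the active THP mode. Here the counter reads 8192 kB. A small sketch of that probe:

```bash
#!/usr/bin/env bash
# The active THP mode is the bracketed word, e.g. "[always] madvise never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)

if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon} kB"   # 8192 kB in this run
else
    echo "THP disabled; skipping AnonHugePages check"
fi
```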
00:04:13.809 04:41:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.809 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.809 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.809 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.809 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.809 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.809 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.809 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.809 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.809 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.809 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6899132 kB' ... (other fields unchanged from the previous system-wide snapshot)
[repetitive xtrace elided: per-field "continue" until HugePages_Surp matches]
00:04:13.810 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.810 04:41:27 -- setup/common.sh@33 -- # echo 0
00:04:13.810 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.810 04:41:27 -- setup/hugepages.sh@99 -- # surp=0
mem=("${mem[@]#Node +([0-9]) }") 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.810 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6899296 kB' 'MemAvailable: 9531268 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2187748 kB' 'Inactive: 732624 kB' 'Active(anon): 94764 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 92656 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB' 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.810 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.810 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- 
setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # continue 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.811 04:41:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.811 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.811 04:41:27 -- setup/common.sh@33 -- # echo 0 00:04:13.811 04:41:27 -- setup/common.sh@33 -- # return 0 00:04:13.811 nr_hugepages=1024 00:04:13.811 resv_hugepages=0 00:04:13.811 surplus_hugepages=0 00:04:13.811 anon_hugepages=8192 00:04:13.811 04:41:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.811 04:41:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.811 04:41:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.811 04:41:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.811 04:41:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:04:13.811 04:41:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.811 04:41:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.811 04:41:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.811 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.811 04:41:27 -- setup/common.sh@18 -- # local node= 00:04:13.811 04:41:27 -- setup/common.sh@19 -- # local var val 00:04:13.811 04:41:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.811 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.811 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.811 04:41:27 -- 
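The loop traced above is the whole trick behind get_meminfo: dump the meminfo table once, then read it with IFS=': ' so each line splits into key and value, skipping non-matching keys with 'continue' until the requested one is found and its value is echoed to the caller. A minimal self-contained sketch of that pattern (the function name and argument handling are illustrative, not SPDK's exact helper):

#!/usr/bin/env bash
# Sketch: resolve a single /proc/meminfo key the way the traced loop does.
# 'get_meminfo_sketch' is a hypothetical name, for illustration only.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other key is skipped, as in the trace
        echo "$val"                        # kB for byte counters, a page count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp   # prints 0 on this run's VM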
00:04:13.811 04:41:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.811 04:41:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.811 04:41:27 -- setup/common.sh@18 -- # local node=
00:04:13.811 04:41:27 -- setup/common.sh@19 -- # local var val
00:04:13.811 04:41:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.811 04:41:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.811 04:41:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.812 04:41:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.812 04:41:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.812 04:41:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.812 04:41:27 -- setup/common.sh@31 -- # IFS=': '
00:04:13.812 04:41:27 -- setup/common.sh@31 -- # read -r var val _
00:04:13.812 04:41:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6899556 kB' 'MemAvailable: 9531528 kB' 'Buffers: 2068 kB' 'Cached: 2823748 kB' 'SwapCached: 0 kB' 'Active: 2187748 kB' 'Inactive: 732624 kB' 'Active(anon): 94764 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 92656 kB' 'Mapped: 25296 kB' 'Shmem: 16892 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'KernelStack: 3744 kB' 'PageTables: 7784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101992 kB' 'Committed_AS: 343508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359691772 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 94060 kB' 'DirectMap2M: 5148672 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: per-key scan against HugePages_Total; every key is skipped via 'continue' until the matching record below]
00:04:13.813 04:41:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.813 04:41:27 -- setup/common.sh@33 -- # echo 1024
00:04:13.813 04:41:27 -- setup/common.sh@33 -- # return 0
00:04:13.813 04:41:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.813 04:41:28 -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.813 04:41:28 -- setup/hugepages.sh@27 -- # local node
00:04:13.813 04:41:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.813 04:41:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:13.813 04:41:28 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:13.813 04:41:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:13.813 04:41:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.813 04:41:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
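For the per-node pass that follows, the same scan runs against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; those sysfs lines carry a "Node 0" prefix, which is why the trace shows mapfile plus the extglob strip mem=("${mem[@]#Node +([0-9]) }") before the key loop. A hedged sketch of that variant (hypothetical function name; extglob must be enabled for the pattern copied from the trace):

#!/usr/bin/env bash
shopt -s extglob   # required by the +([0-9]) pattern below
# Sketch: per-NUMA-node meminfo lookup mirroring the traced mapfile + strip.
# 'node_meminfo_sketch' is a hypothetical name, not SPDK's exact helper.
node_meminfo_sketch() {
    local get=$1 node=$2 var val _ mem
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " from each line
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

node_meminfo_sketch HugePages_Surp 0   # 0 surplus pages on node0 in this run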
00:04:13.813 04:41:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.813 04:41:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.813 04:41:28 -- setup/common.sh@18 -- # local node=0
00:04:13.813 04:41:28 -- setup/common.sh@19 -- # local var val
00:04:13.813 04:41:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.813 04:41:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.813 04:41:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:13.813 04:41:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:13.813 04:41:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.813 04:41:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.813 04:41:28 -- setup/common.sh@31 -- # IFS=': '
00:04:13.813 04:41:28 -- setup/common.sh@31 -- # read -r var val _
00:04:13.813 04:41:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301140 kB' 'MemFree: 6899816 kB' 'MemUsed: 5401324 kB' 'Active: 2187488 kB' 'Inactive: 732624 kB' 'Active(anon): 94504 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2092984 kB' 'Inactive(file): 715940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2825816 kB' 'Mapped: 25296 kB' 'AnonPages: 92268 kB' 'Shmem: 16892 kB' 'KernelStack: 3744 kB' 'PageTables: 7784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171668 kB' 'SReclaimable: 121476 kB' 'SUnreclaim: 50192 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node0 meminfo against HugePages_Surp; every key is skipped via 'continue' until the matching record below]
00:04:13.814 04:41:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.814 04:41:28 -- setup/common.sh@33 -- # echo 0
00:04:13.814 04:41:28 -- setup/common.sh@33 -- # return 0
00:04:13.814 node0=1024 expecting 1024
00:04:13.814 ************************************
00:04:13.814 END TEST no_shrink_alloc
00:04:13.814 ************************************
00:04:13.814 04:41:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.814 04:41:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.814 04:41:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.814 04:41:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.814 04:41:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:13.814 04:41:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:13.814
00:04:13.814 real 0m0.586s
00:04:13.814 user 0m0.306s
00:04:13.814 sys 0m0.343s
00:04:13.814 04:41:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:13.814 04:41:28 -- common/autotest_common.sh@10 -- # set +x
00:04:14.073 04:41:28 -- setup/hugepages.sh@217 -- # clear_hp
00:04:14.073 04:41:28 -- setup/hugepages.sh@37 -- # local node hp
00:04:14.073 04:41:28 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:14.073 04:41:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:14.073 04:41:28 -- setup/hugepages.sh@41 -- # echo 0
00:04:14.073 04:41:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:14.073 04:41:28 -- setup/hugepages.sh@41 -- # echo 0
00:04:14.073 04:41:28 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:14.073 04:41:28 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:14.073 ************************************
00:04:14.073 END TEST hugepages
00:04:14.073 ************************************
00:04:14.073
00:04:14.073 real 0m2.819s
00:04:14.073 user 0m1.290s
00:04:14.073 sys 0m1.701s
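clear_hp, which closes the hugepages suite above, simply returns every reserved hugepage to the kernel by writing 0 through sysfs for each page size on each NUMA node, then exports CLEAR_HUGE=yes so later stages know the pool was released. A sketch of that cleanup (assumes root and the standard kernel sysfs layout; error handling omitted):

#!/usr/bin/env bash
# Sketch of the clear_hp idea: release all hugepages on every NUMA node.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # the traced 'echo 0' writes land here
    done
done
export CLEAR_HUGE=yes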
00:04:14.073 04:41:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.073 04:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:14.073 04:41:28 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.073 04:41:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.073 04:41:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.073 04:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:14.073 ************************************ 00:04:14.073 START TEST driver 00:04:14.073 ************************************ 00:04:14.073 04:41:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.073 * Looking for test storage... 00:04:14.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.073 04:41:28 -- setup/driver.sh@68 -- # setup reset 00:04:14.073 04:41:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.073 04:41:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.641 04:41:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.641 04:41:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.641 04:41:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.641 04:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:14.641 ************************************ 00:04:14.641 START TEST guess_driver 00:04:14.641 ************************************ 00:04:14.641 04:41:28 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:14.641 04:41:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.641 04:41:28 -- setup/driver.sh@47 -- # local fail=0 00:04:14.641 04:41:28 -- setup/driver.sh@49 -- # pick_driver 00:04:14.641 04:41:28 -- setup/driver.sh@36 -- # vfio 00:04:14.641 04:41:28 -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.641 04:41:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.641 04:41:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.641 04:41:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.641 04:41:28 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:14.641 04:41:28 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:14.641 04:41:28 -- setup/driver.sh@32 -- # return 1 00:04:14.641 04:41:28 -- setup/driver.sh@38 -- # uio 00:04:14.641 04:41:28 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:14.641 04:41:28 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:14.641 04:41:28 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:14.641 04:41:28 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:14.641 04:41:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:14.641 insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:14.641 04:41:28 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:14.641 Looking for driver=uio_pci_generic 00:04:14.641 04:41:28 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:14.641 04:41:28 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.641 04:41:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:14.641 04:41:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.641 04:41:28 -- setup/driver.sh@45 -- # setup output config 00:04:14.641 04:41:28 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:14.641 04:41:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.641 04:41:28 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:14.641 04:41:28 -- setup/driver.sh@58 -- # continue 00:04:14.641 04:41:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.900 04:41:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.900 04:41:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:14.900 04:41:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.900 04:41:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.900 04:41:29 -- setup/driver.sh@65 -- # setup reset 00:04:14.900 04:41:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.900 04:41:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.468 ************************************ 00:04:15.468 END TEST guess_driver 00:04:15.468 ************************************ 00:04:15.468 00:04:15.468 real 0m0.813s 00:04:15.468 user 0m0.277s 00:04:15.468 sys 0m0.519s 00:04:15.468 04:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.468 04:41:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.468 ************************************ 00:04:15.468 END TEST driver 00:04:15.468 ************************************ 00:04:15.468 00:04:15.468 real 0m1.312s 00:04:15.468 user 0m0.476s 00:04:15.468 sys 0m0.829s 00:04:15.468 04:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.468 04:41:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.468 04:41:29 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:15.468 04:41:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.468 04:41:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.468 04:41:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.468 ************************************ 00:04:15.468 START TEST devices 00:04:15.468 ************************************ 00:04:15.468 04:41:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:15.468 * Looking for test storage... 
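The guess_driver pass that just finished boils down to one decision: prefer vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is switched on), otherwise fall back to uio_pci_generic if modprobe can resolve its module chain, which is why this CentOS 7 VM without an IOMMU lands on uio_pci_generic. A condensed sketch of that decision (illustrative function name; the real logic lives in test/setup/driver.sh):

#!/usr/bin/env bash
shopt -s nullglob   # so an empty iommu_groups dir yields a zero-length array
# Sketch: choose a PCI driver the way the traced pick_driver/vfio/uio chain does.
pick_driver_sketch() {
    local groups unsafe=''
    groups=(/sys/kernel/iommu_groups/*)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # uio fallback: accept it only if the module dependency chain resolves
    if modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

pick_driver_sketch   # prints uio_pci_generic on this VM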
00:04:15.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:15.468 04:41:29 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:15.468 04:41:29 -- setup/devices.sh@192 -- # setup reset 00:04:15.468 04:41:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.468 04:41:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.727 04:41:29 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:15.727 04:41:29 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:15.727 04:41:29 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:15.727 04:41:29 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:15.727 04:41:29 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:15.727 04:41:29 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:15.727 04:41:29 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:15.727 04:41:29 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.727 04:41:29 -- common/autotest_common.sh@1649 -- # return 1 00:04:15.727 04:41:29 -- setup/devices.sh@196 -- # blocks=() 00:04:15.727 04:41:29 -- setup/devices.sh@196 -- # declare -a blocks 00:04:15.727 04:41:29 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:15.727 04:41:29 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:15.727 04:41:29 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:15.727 04:41:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.727 04:41:29 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:15.727 04:41:29 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.727 04:41:29 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:15.727 04:41:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:15.727 04:41:29 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:15.727 04:41:29 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:15.727 04:41:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.987 No valid GPT data, bailing 00:04:15.987 04:41:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.987 04:41:29 -- scripts/common.sh@393 -- # pt= 00:04:15.987 04:41:29 -- scripts/common.sh@394 -- # return 1 00:04:15.987 04:41:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.987 04:41:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.987 04:41:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.987 04:41:29 -- setup/common.sh@80 -- # echo 5368709120 00:04:15.987 04:41:29 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:15.987 04:41:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.987 04:41:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:15.987 04:41:29 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:15.987 04:41:29 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:15.987 04:41:29 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:15.987 04:41:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.987 04:41:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.987 04:41:30 -- common/autotest_common.sh@10 -- # set +x 00:04:15.987 ************************************ 00:04:15.987 START TEST nvme_mount 00:04:15.987 ************************************ 00:04:15.987 04:41:30 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:15.987 04:41:30 -- 
setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:15.987 04:41:30 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:15.987 04:41:30 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.987 04:41:30 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:15.987 04:41:30 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:15.987 04:41:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.987 04:41:30 -- setup/common.sh@40 -- # local part_no=1 00:04:15.987 04:41:30 -- setup/common.sh@41 -- # local size=1073741824 00:04:15.987 04:41:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.987 04:41:30 -- setup/common.sh@44 -- # parts=() 00:04:15.987 04:41:30 -- setup/common.sh@44 -- # local parts 00:04:15.987 04:41:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.987 04:41:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.987 04:41:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.987 04:41:30 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.987 04:41:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.987 04:41:30 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:15.987 04:41:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.987 04:41:30 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:16.924 Creating new GPT entries. 00:04:16.924 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.924 other utilities. 00:04:16.924 04:41:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.924 04:41:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.924 04:41:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.924 04:41:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.924 04:41:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:17.872 Creating new GPT entries. 00:04:17.872 The operation has completed successfully. 
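The partitioning just logged follows a fixed recipe: wipe the GPT with sgdisk --zap-all, create partition 1 while holding flock on the disk node, and block until udev has published the new partition before touching it. A rough equivalent (values mirror this run; SPDK's scripts/sync_dev_uevents.sh helper is approximated here with udevadm settle, and the sector arithmetic reproduces the traced --new=1:2048:264191):

#!/usr/bin/env bash
set -e
# Sketch of the traced flow: zap GPT, add one partition, wait for udev.
disk=/dev/nvme0n1                      # the test disk selected above
size=$(( 1073741824 / 4096 ))          # 262144 sectors, as computed in the trace
part_start=2048
part_end=$(( part_start + size - 1 ))  # 264191

sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
udevadm settle                         # stand-in for the sync_dev_uevents.sh wait
[[ -b ${disk}p1 ]]                     # the new partition node should now exist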
00:04:17.872 04:41:32 -- setup/common.sh@57 -- # (( part++ )) 00:04:17.872 04:41:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.872 04:41:32 -- setup/common.sh@62 -- # wait 34404 00:04:18.153 04:41:32 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.153 04:41:32 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:18.153 04:41:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.153 04:41:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.153 04:41:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.153 04:41:32 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.153 04:41:32 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.153 04:41:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:18.153 04:41:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.153 04:41:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.153 04:41:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.153 04:41:32 -- setup/devices.sh@53 -- # local found=0 00:04:18.153 04:41:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.153 04:41:32 -- setup/devices.sh@56 -- # : 00:04:18.153 04:41:32 -- setup/devices.sh@59 -- # local pci status 00:04:18.153 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.153 04:41:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:18.153 04:41:32 -- setup/devices.sh@47 -- # setup output config 00:04:18.153 04:41:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.153 04:41:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.413 04:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.413 04:41:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:18.413 04:41:32 -- setup/devices.sh@63 -- # found=1 00:04:18.413 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.413 04:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.413 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.413 04:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.413 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.413 04:41:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.413 04:41:32 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:18.413 04:41:32 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.413 04:41:32 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.413 04:41:32 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.413 04:41:32 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.413 04:41:32 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.413 04:41:32 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.413 04:41:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.413 04:41:32 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.413 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.413 04:41:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.413 04:41:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.672 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.672 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.672 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.672 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.672 04:41:32 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:18.672 04:41:32 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:18.672 04:41:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.672 04:41:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.672 04:41:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.672 04:41:32 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.672 04:41:32 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.672 04:41:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:18.672 04:41:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.672 04:41:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.672 04:41:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.672 04:41:32 -- setup/devices.sh@53 -- # local found=0 00:04:18.672 04:41:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.672 04:41:32 -- setup/devices.sh@56 -- # : 00:04:18.672 04:41:32 -- setup/devices.sh@59 -- # local pci status 00:04:18.672 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.672 04:41:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:18.672 04:41:32 -- setup/devices.sh@47 -- # setup output config 00:04:18.672 04:41:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.672 04:41:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.931 04:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.931 04:41:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:18.931 04:41:32 -- setup/devices.sh@63 -- # found=1 00:04:18.931 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.931 04:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.931 04:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.931 04:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.931 04:41:32 --
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.932 04:41:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.932 04:41:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:18.932 04:41:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.932 04:41:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.932 04:41:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.932 04:41:33 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.932 04:41:33 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:18.932 04:41:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:18.932 04:41:33 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:18.932 04:41:33 -- setup/devices.sh@50 -- # local mount_point= 00:04:18.932 04:41:33 -- setup/devices.sh@51 -- # local test_file= 00:04:18.932 04:41:33 -- setup/devices.sh@53 -- # local found=0 00:04:18.932 04:41:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:18.932 04:41:33 -- setup/devices.sh@59 -- # local pci status 00:04:18.932 04:41:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.932 04:41:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:18.932 04:41:33 -- setup/devices.sh@47 -- # setup output config 00:04:18.932 04:41:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.932 04:41:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.190 04:41:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.190 04:41:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:19.190 04:41:33 -- setup/devices.sh@63 -- # found=1 00:04:19.190 04:41:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.190 04:41:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.190 04:41:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.190 04:41:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.190 04:41:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.449 04:41:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.449 04:41:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:19.449 04:41:33 -- setup/devices.sh@68 -- # return 0 00:04:19.449 04:41:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:19.449 04:41:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.449 04:41:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.449 04:41:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.449 04:41:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.449 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.449 00:04:19.449 real 0m3.434s 00:04:19.449 user 0m0.456s 00:04:19.449 sys 0m0.826s 00:04:19.449 04:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.449 04:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.449 ************************************ 00:04:19.449 END TEST nvme_mount 00:04:19.449 ************************************ 00:04:19.449 04:41:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:19.449 04:41:33 -- common/autotest_common.sh@1077 
-- # '[' 2 -le 1 ']' 00:04:19.449 04:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.449 04:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.449 ************************************ 00:04:19.449 START TEST dm_mount 00:04:19.449 ************************************ 00:04:19.449 04:41:33 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:19.449 04:41:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:19.449 04:41:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:19.449 04:41:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:19.449 04:41:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:19.449 04:41:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.449 04:41:33 -- setup/common.sh@40 -- # local part_no=2 00:04:19.450 04:41:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:19.450 04:41:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.450 04:41:33 -- setup/common.sh@44 -- # parts=() 00:04:19.450 04:41:33 -- setup/common.sh@44 -- # local parts 00:04:19.450 04:41:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.450 04:41:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.450 04:41:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.450 04:41:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.450 04:41:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.450 04:41:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.450 04:41:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.450 04:41:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.450 04:41:33 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:19.450 04:41:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.450 04:41:33 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:20.388 Creating new GPT entries. 00:04:20.388 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.388 other utilities. 00:04:20.388 04:41:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.388 04:41:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.388 04:41:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.388 04:41:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.388 04:41:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:21.766 Creating new GPT entries. 00:04:21.766 The operation has completed successfully. 00:04:21.766 04:41:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:21.766 04:41:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.766 04:41:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.766 04:41:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.766 04:41:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:22.704 The operation has completed successfully. 
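The sector ranges in the two sgdisk calls above follow directly from the loop arithmetic traced in common.sh; a quick check using only values from the trace:

    size=$(( 1073741824 / 4096 ))      # 262144 sectors per partition
    p1_start=2048
    p1_end=$(( p1_start + size - 1 ))  # 264191  -> --new=1:2048:264191
    p2_start=$(( p1_end + 1 ))         # 264192
    p2_end=$(( p2_start + size - 1 ))  # 526335  -> --new=2:264192:526335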
00:04:22.704 04:41:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:22.704 04:41:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.704 04:41:36 -- setup/common.sh@62 -- # wait 34732 00:04:22.704 04:41:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:22.704 04:41:36 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.704 04:41:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.704 04:41:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:22.704 04:41:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:22.704 04:41:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.704 04:41:36 -- setup/devices.sh@161 -- # break 00:04:22.704 04:41:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.704 04:41:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:22.704 04:41:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:22.704 04:41:36 -- setup/devices.sh@166 -- # dm=dm-0 00:04:22.704 04:41:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:22.704 04:41:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:22.704 04:41:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.704 04:41:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:22.704 04:41:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.704 04:41:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.704 04:41:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:22.704 04:41:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.704 04:41:36 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.704 04:41:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:22.704 04:41:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:22.704 04:41:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.704 04:41:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.704 04:41:36 -- setup/devices.sh@53 -- # local found=0 00:04:22.704 04:41:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.704 04:41:36 -- setup/devices.sh@56 -- # : 00:04:22.704 04:41:36 -- setup/devices.sh@59 -- # local pci status 00:04:22.704 04:41:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.704 04:41:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:22.704 04:41:36 -- setup/devices.sh@47 -- # setup output config 00:04:22.704 04:41:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.704 04:41:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.963 04:41:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.963 04:41:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == 
*\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.963 04:41:36 -- setup/devices.sh@63 -- # found=1 00:04:22.963 04:41:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.963 04:41:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.963 04:41:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.963 04:41:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.963 04:41:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.963 04:41:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.963 04:41:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:22.963 04:41:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.963 04:41:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.963 04:41:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.963 04:41:37 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.963 04:41:37 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.963 04:41:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:22.963 04:41:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.963 04:41:37 -- setup/devices.sh@50 -- # local mount_point= 00:04:22.963 04:41:37 -- setup/devices.sh@51 -- # local test_file= 00:04:22.963 04:41:37 -- setup/devices.sh@53 -- # local found=0 00:04:22.963 04:41:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.963 04:41:37 -- setup/devices.sh@59 -- # local pci status 00:04:22.963 04:41:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.963 04:41:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:22.963 04:41:37 -- setup/devices.sh@47 -- # setup output config 00:04:22.963 04:41:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.963 04:41:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.222 04:41:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.222 04:41:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:23.222 04:41:37 -- setup/devices.sh@63 -- # found=1 00:04:23.222 04:41:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.222 04:41:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.222 04:41:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.222 04:41:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.222 04:41:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.222 04:41:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.222 04:41:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.222 04:41:37 -- setup/devices.sh@68 -- # return 0 00:04:23.222 04:41:37 -- setup/devices.sh@187 -- # cleanup_dm 00:04:23.222 04:41:37 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.222 04:41:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.222 04:41:37 -- setup/devices.sh@37 -- # dmsetup remove --force 
nvme_dm_test 00:04:23.482 04:41:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:23.482 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.482 04:41:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:23.482 00:04:23.482 real 0m4.005s 00:04:23.482 user 0m0.315s 00:04:23.482 sys 0m0.603s 00:04:23.482 ************************************ 00:04:23.482 END TEST dm_mount 00:04:23.482 ************************************ 00:04:23.482 04:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.482 04:41:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.482 04:41:37 -- setup/devices.sh@1 -- # cleanup 00:04:23.482 04:41:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:23.482 04:41:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.482 04:41:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:23.482 04:41:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.482 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:23.482 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:23.482 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.482 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.482 04:41:37 -- setup/devices.sh@12 -- # cleanup_dm 00:04:23.482 04:41:37 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.482 04:41:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.482 04:41:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.482 04:41:37 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:23.482 00:04:23.482 real 0m8.121s 00:04:23.482 user 0m1.080s 00:04:23.482 sys 0m1.797s 00:04:23.482 ************************************ 00:04:23.482 END TEST devices 00:04:23.482 ************************************ 00:04:23.482 04:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.482 04:41:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.482 00:04:23.482 real 0m14.861s 00:04:23.482 user 0m3.865s 00:04:23.482 sys 0m5.991s 00:04:23.482 04:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.482 ************************************ 00:04:23.482 END TEST setup.sh 00:04:23.482 ************************************ 00:04:23.482 04:41:37 -- common/autotest_common.sh@10 -- # set +x 00:04:23.482 04:41:37 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:23.741 Hugepages 00:04:23.741 node hugesize free / total 00:04:23.741 node0 1048576kB 0 / 0 00:04:23.741 node0 2048kB 2048 / 2048 00:04:23.741 00:04:23.741 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.741 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:23.741 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1 00:04:23.741 04:41:37 -- spdk/autotest.sh@141 -- # uname -s 00:04:23.742 04:41:37 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:23.742 04:41:37 --
spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:23.742 04:41:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:24.260 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.260 04:41:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:25.198 04:41:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:25.198 04:41:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:25.198 04:41:39 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:25.198 04:41:39 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:25.198 04:41:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.198 04:41:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.198 04:41:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.198 04:41:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.198 04:41:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.457 04:41:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:25.457 04:41:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:25.457 04:41:39 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:25.457 Waiting for block devices as requested 00:04:25.716 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.716 04:41:39 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:25.716 04:41:39 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:25.716 04:41:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:25.716 04:41:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:25.716 04:41:39 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:25.716 04:41:39 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:25.716 04:41:39 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:25.716 04:41:39 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:25.716 04:41:39 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:25.716 04:41:39 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:25.716 04:41:39 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:25.716 04:41:39 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:25.716 04:41:39 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:25.716 04:41:39 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:25.716 04:41:39 -- common/autotest_common.sh@1542 -- # continue 00:04:25.716 04:41:39 -- 
spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:25.716 04:41:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:25.716 04:41:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.716 04:41:39 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:25.716 04:41:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:25.716 04:41:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.716 04:41:39 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:26.235 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.235 04:41:40 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:26.235 04:41:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:26.235 04:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.235 04:41:40 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:26.235 04:41:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:26.235 04:41:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.235 04:41:40 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:26.235 04:41:40 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:26.235 04:41:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:26.235 04:41:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:26.235 04:41:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:26.235 04:41:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.235 04:41:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.235 04:41:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:26.497 04:41:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:26.497 04:41:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:26.497 04:41:40 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:26.497 04:41:40 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:26.497 04:41:40 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:26.497 04:41:40 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.497 04:41:40 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:26.497 04:41:40 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:26.497 04:41:40 -- common/autotest_common.sh@1578 -- # return 0 00:04:26.497 04:41:40 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:04:26.497 04:41:40 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:26.497 04:41:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.497 04:41:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.497 04:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.497 ************************************ 00:04:26.497 START TEST unittest 00:04:26.497 ************************************ 00:04:26.497 04:41:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:26.497 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:26.497 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:26.497 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:26.497 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:26.497 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:04:26.497 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:26.497 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:26.497 ++ rpc_py=rpc_cmd 00:04:26.497 ++ set -e 00:04:26.497 ++ shopt -s nullglob 00:04:26.497 ++ shopt -s extglob 00:04:26.497 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:26.497 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:26.497 +++ CONFIG_RDMA=y 00:04:26.497 +++ CONFIG_UNIT_TESTS=y 00:04:26.497 +++ CONFIG_GOLANG=n 00:04:26.497 +++ CONFIG_FUSE=n 00:04:26.497 +++ CONFIG_ISAL=n 00:04:26.497 +++ CONFIG_VTUNE_DIR= 00:04:26.497 +++ CONFIG_CUSTOMOCF=n 00:04:26.497 +++ CONFIG_IPSEC_MB_DIR= 00:04:26.497 +++ CONFIG_VBDEV_COMPRESS=n 00:04:26.497 +++ CONFIG_OCF_PATH= 00:04:26.497 +++ CONFIG_SHARED=n 00:04:26.497 +++ CONFIG_DPDK_LIB_DIR= 00:04:26.497 +++ CONFIG_TESTS=y 00:04:26.497 +++ CONFIG_APPS=y 00:04:26.497 +++ CONFIG_ISAL_CRYPTO=n 00:04:26.497 +++ CONFIG_LIBDIR= 00:04:26.497 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:26.497 +++ CONFIG_DAOS_DIR= 00:04:26.497 +++ CONFIG_ISCSI_INITIATOR=n 00:04:26.497 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:26.497 +++ CONFIG_ASAN=y 00:04:26.497 +++ CONFIG_LTO=n 00:04:26.497 +++ CONFIG_CET=n 00:04:26.497 +++ CONFIG_FUZZER=n 00:04:26.497 +++ CONFIG_USDT=n 00:04:26.497 +++ CONFIG_VTUNE=n 00:04:26.497 +++ CONFIG_VHOST=y 00:04:26.497 +++ CONFIG_WPDK_DIR= 00:04:26.497 +++ CONFIG_UBLK=n 00:04:26.497 +++ CONFIG_URING=n 00:04:26.497 +++ CONFIG_SMA=n 00:04:26.497 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:26.497 +++ CONFIG_IDXD_KERNEL=n 00:04:26.497 +++ CONFIG_FC_PATH= 00:04:26.497 +++ CONFIG_PREFIX=/usr/local 00:04:26.497 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:04:26.497 +++ CONFIG_XNVME=n 00:04:26.497 +++ CONFIG_RDMA_PROV=verbs 00:04:26.497 +++ CONFIG_RDMA_SET_TOS=y 00:04:26.497 +++ CONFIG_FUZZER_LIB= 00:04:26.497 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:26.497 +++ CONFIG_ARCH=native 00:04:26.497 +++ CONFIG_PGO_CAPTURE=n 00:04:26.497 +++ CONFIG_DAOS=y 00:04:26.497 +++ CONFIG_WERROR=y 00:04:26.497 +++ CONFIG_DEBUG=y 00:04:26.497 +++ CONFIG_AVAHI=n 00:04:26.497 +++ CONFIG_CROSS_PREFIX= 00:04:26.497 +++ CONFIG_PGO_USE=n 00:04:26.497 +++ CONFIG_CRYPTO=n 00:04:26.497 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:26.497 +++ CONFIG_OPENSSL_PATH= 00:04:26.497 +++ CONFIG_EXAMPLES=y 00:04:26.497 +++ CONFIG_DPDK_INC_DIR= 00:04:26.497 +++ CONFIG_MAX_LCORES= 00:04:26.497 +++ CONFIG_VIRTIO=y 00:04:26.497 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:26.497 +++ CONFIG_IPSEC_MB=n 00:04:26.497 +++ CONFIG_UBSAN=n 00:04:26.497 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:26.497 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:26.497 +++ CONFIG_HAVE_LIBBSD=n 00:04:26.497 +++ CONFIG_URING_PATH= 00:04:26.497 +++ CONFIG_NVME_CUSE=y 00:04:26.497 +++ CONFIG_URING_ZNS=n 00:04:26.497 +++ CONFIG_VFIO_USER=n 00:04:26.497 +++ CONFIG_FC=n 00:04:26.497 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:04:26.497 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:26.497 +++ CONFIG_RBD=n 00:04:26.497 +++ CONFIG_RAID5F=n 00:04:26.497 +++ CONFIG_VFIO_USER_DIR= 00:04:26.497 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:26.497 +++ CONFIG_TSAN=n 00:04:26.497 +++ CONFIG_IDXD=y 00:04:26.497 +++ CONFIG_OCF=n 00:04:26.497 +++ CONFIG_CRYPTO_MLX5=n 00:04:26.497 +++ CONFIG_FIO_PLUGIN=y 00:04:26.497 +++ CONFIG_COVERAGE=y 00:04:26.497 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:26.497 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:26.497 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 
00:04:26.497 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:26.497 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:26.497 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:26.497 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:26.497 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:26.497 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:26.497 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:26.497 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:26.498 +++ VHOST_APP=("$_app_dir/vhost") 00:04:26.498 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:26.498 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:26.498 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:26.498 +++ [[ #ifndef SPDK_CONFIG_H 00:04:26.498 #define SPDK_CONFIG_H 00:04:26.498 #define SPDK_CONFIG_APPS 1 00:04:26.498 #define SPDK_CONFIG_ARCH native 00:04:26.498 #define SPDK_CONFIG_ASAN 1 00:04:26.498 #undef SPDK_CONFIG_AVAHI 00:04:26.498 #undef SPDK_CONFIG_CET 00:04:26.498 #define SPDK_CONFIG_COVERAGE 1 00:04:26.498 #define SPDK_CONFIG_CROSS_PREFIX 00:04:26.498 #undef SPDK_CONFIG_CRYPTO 00:04:26.498 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:26.498 #undef SPDK_CONFIG_CUSTOMOCF 00:04:26.498 #define SPDK_CONFIG_DAOS 1 00:04:26.498 #define SPDK_CONFIG_DAOS_DIR 00:04:26.498 #define SPDK_CONFIG_DEBUG 1 00:04:26.498 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:26.498 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:26.498 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:26.498 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:26.498 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:26.498 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:26.498 #define SPDK_CONFIG_EXAMPLES 1 00:04:26.498 #undef SPDK_CONFIG_FC 00:04:26.498 #define SPDK_CONFIG_FC_PATH 00:04:26.498 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:26.498 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:26.498 #undef SPDK_CONFIG_FUSE 00:04:26.498 #undef SPDK_CONFIG_FUZZER 00:04:26.498 #define SPDK_CONFIG_FUZZER_LIB 00:04:26.498 #undef SPDK_CONFIG_GOLANG 00:04:26.498 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:26.498 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:26.498 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:26.498 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:26.498 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:04:26.498 #define SPDK_CONFIG_IDXD 1 00:04:26.498 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:26.498 #undef SPDK_CONFIG_IPSEC_MB 00:04:26.498 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:26.498 #undef SPDK_CONFIG_ISAL 00:04:26.498 #undef SPDK_CONFIG_ISAL_CRYPTO 00:04:26.498 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:04:26.498 #define SPDK_CONFIG_LIBDIR 00:04:26.498 #undef SPDK_CONFIG_LTO 00:04:26.498 #define SPDK_CONFIG_MAX_LCORES 00:04:26.498 #define SPDK_CONFIG_NVME_CUSE 1 00:04:26.498 #undef SPDK_CONFIG_OCF 00:04:26.498 #define SPDK_CONFIG_OCF_PATH 00:04:26.498 #define SPDK_CONFIG_OPENSSL_PATH 00:04:26.498 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:26.498 #undef SPDK_CONFIG_PGO_USE 00:04:26.498 #define SPDK_CONFIG_PREFIX /usr/local 00:04:26.498 #undef SPDK_CONFIG_RAID5F 00:04:26.498 #undef SPDK_CONFIG_RBD 00:04:26.498 #define SPDK_CONFIG_RDMA 1 00:04:26.498 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:26.498 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:26.498 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:04:26.498 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:26.498 #undef SPDK_CONFIG_SHARED 00:04:26.498 #undef SPDK_CONFIG_SMA 00:04:26.498 #define SPDK_CONFIG_TESTS 1 00:04:26.498 #undef SPDK_CONFIG_TSAN 00:04:26.498 #undef SPDK_CONFIG_UBLK 
00:04:26.498 #undef SPDK_CONFIG_UBSAN 00:04:26.498 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:26.498 #undef SPDK_CONFIG_URING 00:04:26.498 #define SPDK_CONFIG_URING_PATH 00:04:26.498 #undef SPDK_CONFIG_URING_ZNS 00:04:26.498 #undef SPDK_CONFIG_USDT 00:04:26.498 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:26.498 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:26.498 #undef SPDK_CONFIG_VFIO_USER 00:04:26.498 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:26.498 #define SPDK_CONFIG_VHOST 1 00:04:26.498 #define SPDK_CONFIG_VIRTIO 1 00:04:26.498 #undef SPDK_CONFIG_VTUNE 00:04:26.498 #define SPDK_CONFIG_VTUNE_DIR 00:04:26.498 #define SPDK_CONFIG_WERROR 1 00:04:26.498 #define SPDK_CONFIG_WPDK_DIR 00:04:26.498 #undef SPDK_CONFIG_XNVME 00:04:26.498 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:26.498 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:26.498 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.498 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:26.498 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.498 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.498 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:26.498 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:26.498 ++++ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:26.498 ++++ export PATH 00:04:26.498 ++++ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:26.498 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:26.498 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:26.498 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:26.498 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:26.498 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:26.498 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:26.498 +++ TEST_TAG=N/A 00:04:26.498 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:26.498 ++ : 1 00:04:26.498 ++ export RUN_NIGHTLY 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_RUN_VALGRIND 00:04:26.498 ++ : 1 00:04:26.498 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:26.498 ++ : 1 00:04:26.498 ++ export SPDK_TEST_UNITTEST 00:04:26.498 ++ : 00:04:26.498 ++ export SPDK_TEST_AUTOBUILD 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_RELEASE_BUILD 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_ISAL 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_ISCSI 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVME 
00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVME_PMR 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVME_BP 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVME_CLI 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVME_CUSE 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVME_FDP 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVMF 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_VFIOUSER 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_FUZZER 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_FUZZER_SHORT 00:04:26.498 ++ : rdma 00:04:26.498 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_RBD 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_VHOST 00:04:26.498 ++ : 1 00:04:26.498 ++ export SPDK_TEST_BLOCKDEV 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_IOAT 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_BLOBFS 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_VHOST_INIT 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_LVOL 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:26.498 ++ : 1 00:04:26.498 ++ export SPDK_RUN_ASAN 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_RUN_UBSAN 00:04:26.498 ++ : 00:04:26.498 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_RUN_NON_ROOT 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_CRYPTO 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_FTL 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_OCF 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_VMD 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_OPAL 00:04:26.498 ++ : 00:04:26.498 ++ export SPDK_TEST_NATIVE_DPDK 00:04:26.498 ++ : true 00:04:26.498 ++ export SPDK_AUTOTEST_X 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_RAID5 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_URING 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_USDT 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_USE_IGB_UIO 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_SCHEDULER 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_SCANBUILD 00:04:26.498 ++ : 00:04:26.498 ++ export SPDK_TEST_NVMF_NICS 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_SMA 00:04:26.498 ++ : 1 00:04:26.498 ++ export SPDK_TEST_DAOS 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_XNVME 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_ACCEL_DSA 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_ACCEL_IAA 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_ACCEL_IOAT 00:04:26.498 ++ : 00:04:26.498 ++ export SPDK_TEST_FUZZER_TARGET 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_TEST_NVMF_MDNS 00:04:26.498 ++ : 0 00:04:26.498 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:26.498 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:26.498 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:26.498 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:26.498 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:26.498 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:26.498 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:26.498 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:26.498 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:26.498 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:26.498 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:26.498 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:26.499 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:26.499 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:26.499 ++ PYTHONDONTWRITEBYTECODE=1 00:04:26.499 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:26.499 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:26.499 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:26.499 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:26.499 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:26.499 ++ rm -rf /var/tmp/asan_suppression_file 00:04:26.499 ++ cat 00:04:26.499 ++ echo leak:libfuse3.so 00:04:26.499 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:26.499 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:26.499 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:26.499 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:26.499 ++ '[' -z /var/spdk/dependencies ']' 00:04:26.499 ++ export DEPENDENCY_DIR 00:04:26.499 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:26.499 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:26.499 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:26.499 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:26.499 ++ export QEMU_BIN= 00:04:26.499 ++ QEMU_BIN= 00:04:26.499 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:26.499 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:26.499 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:26.499 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:26.499 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:26.499 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:26.499 ++ '[' 0 -eq 0 ']' 00:04:26.499 ++ export valgrind= 00:04:26.499 ++ valgrind= 00:04:26.499 +++ uname -s 00:04:26.499 ++ '[' Linux = Linux ']' 00:04:26.499 ++ HUGEMEM=4096 00:04:26.499 ++ export CLEAR_HUGE=yes 00:04:26.499 ++ CLEAR_HUGE=yes 00:04:26.499 ++ [[ 0 -eq 1 ]] 00:04:26.499 ++ [[ 0 -eq 1 ]] 00:04:26.499 ++ MAKE=make 00:04:26.499 +++ nproc 00:04:26.499 ++ MAKEFLAGS=-j10 00:04:26.499 ++ export HUGEMEM=4096 00:04:26.499 ++ HUGEMEM=4096 00:04:26.499 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:26.499 ++ NO_HUGE=() 00:04:26.499 ++ TEST_MODE= 00:04:26.499 ++ [[ -z '' ]] 00:04:26.499 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:26.499 ++ exec 00:04:26.499 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:26.499 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:26.499 ++ set_test_storage 2147483648 00:04:26.499 ++ [[ -v testdir ]] 00:04:26.499 ++ local requested_size=2147483648 00:04:26.499 ++ local mount target_dir 00:04:26.499 ++ local -A mounts fss sizes avails uses 00:04:26.499 ++ local source fs size avail mount use 00:04:26.499 ++ local storage_fallback storage_candidates 00:04:26.499 +++ mktemp -udt spdk.XXXXXX 00:04:26.499 ++ storage_fallback=/tmp/spdk.D8dFVG 00:04:26.499 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:26.499 ++ [[ -n '' ]] 00:04:26.499 ++ [[ -n '' ]] 00:04:26.499 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.D8dFVG/tests/unit /tmp/spdk.D8dFVG 00:04:26.499 ++ requested_size=2214592512 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 +++ grep -v Filesystem 00:04:26.499 +++ df -T 00:04:26.499 ++ mounts["$mount"]=devtmpfs 00:04:26.499 ++ fss["$mount"]=devtmpfs 00:04:26.499 ++ avails["$mount"]=6267633664 00:04:26.499 ++ sizes["$mount"]=6267633664 00:04:26.499 ++ uses["$mount"]=0 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ mounts["$mount"]=tmpfs 00:04:26.499 ++ fss["$mount"]=tmpfs 00:04:26.499 ++ avails["$mount"]=6298181632 00:04:26.499 ++ sizes["$mount"]=6298181632 00:04:26.499 ++ uses["$mount"]=0 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ mounts["$mount"]=tmpfs 00:04:26.499 ++ fss["$mount"]=tmpfs 00:04:26.499 ++ avails["$mount"]=6280880128 00:04:26.499 ++ sizes["$mount"]=6298181632 00:04:26.499 ++ uses["$mount"]=17301504 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ mounts["$mount"]=tmpfs 00:04:26.499 ++ fss["$mount"]=tmpfs 00:04:26.499 ++ avails["$mount"]=6298181632 00:04:26.499 ++ sizes["$mount"]=6298181632 00:04:26.499 ++ uses["$mount"]=0 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ mounts["$mount"]=/dev/vda1 00:04:26.499 ++ fss["$mount"]=xfs 00:04:26.499 ++ avails["$mount"]=14369361920 00:04:26.499 ++ sizes["$mount"]=21463302144 00:04:26.499 ++ uses["$mount"]=7093940224 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ mounts["$mount"]=tmpfs 00:04:26.499 ++ fss["$mount"]=tmpfs 00:04:26.499 ++ avails["$mount"]=1259638784 00:04:26.499 ++ sizes["$mount"]=1259638784 00:04:26.499 ++ uses["$mount"]=0 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:04:26.499 ++ fss["$mount"]=fuse.sshfs 00:04:26.499 ++ avails["$mount"]=96604479488 00:04:26.499 ++ sizes["$mount"]=105088212992 00:04:26.499 ++ uses["$mount"]=3098300416 00:04:26.499 ++ read -r source fs size use avail _ mount 00:04:26.499 ++ printf '* Looking for test storage...\n' 00:04:26.499 * Looking for test storage... 
00:04:26.499 ++ local target_space new_size 00:04:26.499 ++ for target_dir in "${storage_candidates[@]}" 00:04:26.499 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:26.499 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:26.499 ++ mount=/ 00:04:26.499 ++ target_space=14369361920 00:04:26.499 ++ (( target_space == 0 || target_space < requested_size )) 00:04:26.499 ++ (( target_space >= requested_size )) 00:04:26.499 ++ [[ xfs == tmpfs ]] 00:04:26.499 ++ [[ xfs == ramfs ]] 00:04:26.499 ++ [[ / == / ]] 00:04:26.499 ++ new_size=9308532736 00:04:26.499 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:26.499 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:26.499 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:26.499 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:26.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:26.499 ++ return 0 00:04:26.499 ++ set -o errtrace 00:04:26.499 ++ shopt -s extdebug 00:04:26.499 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:26.499 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:26.499 04:41:40 -- common/autotest_common.sh@1672 -- # true 00:04:26.499 04:41:40 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:04:26.499 04:41:40 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:26.499 04:41:40 -- common/autotest_common.sh@29 -- # exec 00:04:26.499 04:41:40 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:26.499 04:41:40 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:04:26.499 04:41:40 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:26.499 04:41:40 -- common/autotest_common.sh@18 -- # set -x 00:04:26.499 04:41:40 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:26.499 04:41:40 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:26.499 04:41:40 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:26.499 04:41:40 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:26.499 04:41:40 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:26.499 04:41:40 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:26.499 04:41:40 -- unit/unittest.sh@179 -- # hash lcov 00:04:26.499 04:41:40 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:26.499 04:41:40 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:26.499 04:41:40 -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:26.499 04:41:40 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:26.499 04:41:40 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:26.499 04:41:40 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:26.499 04:41:40 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:26.499 04:41:40 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:26.499 --rc lcov_branch_coverage=1 00:04:26.499 --rc lcov_function_coverage=1 00:04:26.499 --rc genhtml_branch_coverage=1 00:04:26.499 --rc genhtml_function_coverage=1 00:04:26.499 --rc genhtml_legend=1 00:04:26.499 --rc geninfo_all_blocks=1 00:04:26.499 ' 00:04:26.499 04:41:40 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:26.499 --rc lcov_branch_coverage=1 00:04:26.499 --rc lcov_function_coverage=1 00:04:26.499 --rc genhtml_branch_coverage=1 00:04:26.499 --rc genhtml_function_coverage=1 00:04:26.499 --rc genhtml_legend=1 00:04:26.499 
--rc geninfo_all_blocks=1 00:04:26.499 ' 00:04:26.499 04:41:40 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:04:26.499 --rc lcov_branch_coverage=1 00:04:26.499 --rc lcov_function_coverage=1 00:04:26.499 --rc genhtml_branch_coverage=1 00:04:26.499 --rc genhtml_function_coverage=1 00:04:26.499 --rc genhtml_legend=1 00:04:26.499 --rc geninfo_all_blocks=1 00:04:26.499 --no-external' 00:04:26.499 04:41:40 -- unit/unittest.sh@200 -- # LCOV='lcov 00:04:26.499 --rc lcov_branch_coverage=1 00:04:26.499 --rc lcov_function_coverage=1 00:04:26.499 --rc genhtml_branch_coverage=1 00:04:26.499 --rc genhtml_function_coverage=1 00:04:26.499 --rc genhtml_legend=1 00:04:26.499 --rc geninfo_all_blocks=1 00:04:26.499 --no-external' 00:04:26.499 04:41:40 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:04:33.087 geninfo: WARNING: GCOV did not produce any data (no functions found) for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno, ftl_band_upgrade.gcno and ftl_chunk_upgrade.gcno
00:04:47.988 geninfo: WARNING: GCOV did not produce any data (no functions found) for the header-compilation stubs under /home/vagrant/spdk_repo/spdk/test/cpp_headers/, one warning per *.gcno: pipe, accel, rpc, vfio_user_spec, memory, accel_module, nbd, bit_pool, ioat, blobfs, version, trace_parser, vfio_user_pci, opal_spec, uuid, bdev, env, hexlify, likely, vhost, dma, nvme_zns, crc32, env_dpdk, init, fd_group, bdev_module, blob_bdev, opal, event, base64, nvmf, nvme, nvmf_spec, crc64, blobfs_bdev, fd, queue, barrier, nvmf_fc_spec, nvmf_transport, zipf, scheduler, dif, scsi_spec, blob, lvol, cpuset, thread, tree, ftl, xor, assert, file, endian, notify, util, log, sock, trace, nvme_ocssd_spec, config, histogram_data, nvme_intel, scsi, idxd_spec, idxd, crc16, ublk, bdev_zone, stdinc, vmd, jsonrpc, conf, iscsi_spec, nvmf_cmd, ioat_spec, bit_array, pci_ids, nvme_spec, string, gpt_spec, nvme_ocssd, json, reduce, mmio
00:05:26.712 04:42:34 -- unit/unittest.sh@206 -- # uname -m 00:05:26.712 04:42:34 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:26.712 04:42:34 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:26.712 04:42:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.712 04:42:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.712 04:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.712 ************************************ 00:05:26.712 START TEST unittest_pci_event 00:05:26.712 ************************************ 00:05:26.712 04:42:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:26.712 00:05:26.712 00:05:26.712 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.712 http://cunit.sourceforge.net/ 00:05:26.712 00:05:26.712 00:05:26.712 Suite: pci_event 00:05:26.712 Test: test_pci_parse_event ...passed 00:05:26.712 00:05:26.712 [2024-05-15 04:42:34.864442]
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:26.712 [2024-05-15 04:42:34.864772] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:26.712 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.712 suites 1 1 n/a 0 0 00:05:26.712 tests 1 1 1 0 0 00:05:26.712 asserts 15 15 15 0 n/a 00:05:26.712 00:05:26.712 Elapsed time = 0.000 seconds 00:05:26.712 ************************************ 00:05:26.712 END TEST unittest_pci_event 00:05:26.712 ************************************ 00:05:26.712 00:05:26.712 real 0m0.037s 00:05:26.712 user 0m0.016s 00:05:26.712 sys 0m0.019s 00:05:26.712 04:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.712 04:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.712 04:42:34 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:26.712 04:42:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.712 04:42:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.712 04:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.712 ************************************ 00:05:26.712 START TEST unittest_include 00:05:26.712 ************************************ 00:05:26.712 04:42:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:26.712 00:05:26.712 00:05:26.712 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.712 http://cunit.sourceforge.net/ 00:05:26.712 00:05:26.712 00:05:26.712 Suite: histogram 00:05:26.712 Test: histogram_test ...passed 00:05:26.712 Test: histogram_merge ...passed 00:05:26.712 00:05:26.712 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.712 suites 1 1 n/a 0 0 00:05:26.712 tests 2 2 2 0 0 00:05:26.712 asserts 50 50 50 0 n/a 00:05:26.712 00:05:26.712 Elapsed time = 0.000 seconds 00:05:26.712 ************************************ 00:05:26.712 END TEST unittest_include 00:05:26.712 ************************************ 00:05:26.712 00:05:26.712 real 0m0.031s 00:05:26.712 user 0m0.013s 00:05:26.712 sys 0m0.018s 00:05:26.712 04:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.712 04:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.712 04:42:34 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:26.712 04:42:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.712 04:42:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.712 04:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.712 ************************************ 00:05:26.712 START TEST unittest_bdev 00:05:26.712 ************************************ 00:05:26.712 04:42:34 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:26.712 04:42:34 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:26.712 00:05:26.712 00:05:26.712 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.712 http://cunit.sourceforge.net/ 00:05:26.712 00:05:26.712 00:05:26.712 Suite: bdev 00:05:26.712 Test: bytes_to_blocks_test ...passed 00:05:26.712 Test: num_blocks_test ...passed 00:05:26.712 Test: io_valid_test ...passed 00:05:26.712 Test: open_write_test ...[2024-05-15 04:42:35.108579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 
already claimed: type exclusive_write by module bdev_ut 00:05:26.712 [2024-05-15 04:42:35.109221] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:26.712 [2024-05-15 04:42:35.109396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:26.712 passed 00:05:26.712 Test: claim_test ...passed 00:05:26.712 Test: alias_add_del_test ...[2024-05-15 04:42:35.240551] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:26.712 [2024-05-15 04:42:35.240702] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:26.712 [2024-05-15 04:42:35.240933] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:26.712 passed 00:05:26.712 Test: get_device_stat_test ...passed 00:05:26.713 Test: bdev_io_types_test ...passed 00:05:26.713 Test: bdev_io_wait_test ...passed 00:05:26.713 Test: bdev_io_spans_split_test ...passed 00:05:26.713 Test: bdev_io_boundary_split_test ...passed 00:05:26.713 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-15 04:42:35.495778] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:26.713 passed 00:05:26.713 Test: bdev_io_mix_split_test ...passed 00:05:26.713 Test: bdev_io_split_with_io_wait ...passed 00:05:26.713 Test: bdev_io_write_unit_split_test ...[2024-05-15 04:42:35.688240] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:26.713 [2024-05-15 04:42:35.688341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:26.713 [2024-05-15 04:42:35.688367] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:26.713 [2024-05-15 04:42:35.688400] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:26.713 passed 00:05:26.713 Test: bdev_io_alignment_with_boundary ...passed 00:05:26.713 Test: bdev_io_alignment ...passed 00:05:26.713 Test: bdev_histograms ...passed 00:05:26.713 Test: bdev_write_zeroes ...passed 00:05:26.713 Test: bdev_compare_and_write ...passed 00:05:26.713 Test: bdev_compare ...passed 00:05:26.713 Test: bdev_compare_emulated ...passed 00:05:26.713 Test: bdev_zcopy_write ...passed 00:05:26.713 Test: bdev_zcopy_read ...passed 00:05:26.713 Test: bdev_open_while_hotremove ...passed 00:05:26.713 Test: bdev_close_while_hotremove ...passed 00:05:26.713 Test: bdev_open_ext_test ...passed 00:05:26.713 Test: bdev_open_ext_unregister ...passed 00:05:26.713 Test: bdev_set_io_timeout ...[2024-05-15 04:42:36.309218] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:26.713 [2024-05-15 04:42:36.309377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:26.713 passed 00:05:26.713 Test: bdev_set_qd_sampling ...passed 00:05:26.713 Test: lba_range_overlap ...passed 00:05:26.713 Test: lock_lba_range_check_ranges ...passed 00:05:26.713 Test: lock_lba_range_with_io_outstanding ...passed 00:05:26.713 
Test: lock_lba_range_overlapped ...passed 00:05:26.713 Test: bdev_quiesce ...[2024-05-15 04:42:36.566415] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:26.713 passed 00:05:26.713 Test: bdev_io_abort ...passed 00:05:26.713 Test: bdev_unmap ...passed 00:05:26.713 Test: bdev_write_zeroes_split_test ...passed 00:05:26.713 Test: bdev_set_options_test ...[2024-05-15 04:42:36.729647] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:26.713 passed 00:05:26.713 Test: bdev_get_memory_domains ...passed 00:05:26.713 Test: bdev_io_ext ...passed 00:05:26.713 Test: bdev_io_ext_no_opts ...passed 00:05:26.713 Test: bdev_io_ext_invalid_opts ...passed 00:05:26.713 Test: bdev_io_ext_split ...passed 00:05:26.713 Test: bdev_io_ext_bounce_buffer ...passed 00:05:26.713 Test: bdev_register_uuid_alias ...[2024-05-15 04:42:36.993002] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name cab0d8e8-24b3-45fd-b641-5b9c40b9d20b already exists 00:05:26.713 [2024-05-15 04:42:36.993073] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:cab0d8e8-24b3-45fd-b641-5b9c40b9d20b alias for bdev bdev0 00:05:26.713 passed 00:05:26.713 Test: bdev_unregister_by_name ...passed 00:05:26.713 Test: for_each_bdev_test ...passed 00:05:26.713 Test: bdev_seek_test ...[2024-05-15 04:42:37.017001] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:26.713 [2024-05-15 04:42:37.017065] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7839:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
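Every *_ut binary in this run prints the same report shape: the CUnit banner, a Suite:/Test: listing, and a Run Summary table of suites, tests and asserts. That shape comes from CUnit's basic interface, which these test binaries link against. As a hedged sketch only — the suite and test names below are invented for illustration and are not taken from the SPDK tree — a minimal CUnit 2.1-3 program that produces a report of this shape is:

    /* Minimal CUnit 2.1-3 sketch; suite and test names are invented
     * for illustration. Build: cc skeleton.c -lcunit */
    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(2 + 2, 4);      /* each assert feeds the "asserts" column */
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        CU_pSuite suite = CU_add_suite("example", NULL, NULL);  /* "Suite: example" */
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints the "Test: ... passed" lines */
        CU_basic_run_tests();               /* prints the Run Summary table */
        CU_cleanup_registry();
        return CU_get_error();
    }

The asserts column in the Run Summary counts every CU_ASSERT_* evaluated across the run, which is why a suite such as bdev_ut can report thousands of asserts for a few dozen tests.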
00:05:26.713 passed 00:05:26.713 Test: bdev_copy ...passed 00:05:26.713 Test: bdev_copy_split_test ...passed 00:05:26.713 Test: examine_locks ...passed 00:05:26.713 Test: claim_v2_rwo ...passed 00:05:26.713 Test: claim_v2_rom ...passed 00:05:26.713 Test: claim_v2_rwm ...passed 00:05:26.713 Test: claim_v2_existing_writer ...passed 00:05:26.713 Test: claim_v2_existing_v1 ...passed 00:05:26.713 Test: claim_v1_existing_v2 ...passed 00:05:26.713 Test: examine_claimed ...passed 00:05:26.713 00:05:26.713 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.713 suites 1 1 n/a 0 0 00:05:26.713 tests 59 59 59 0 0 00:05:26.713 asserts 4599 4599 4599 0 n/a 00:05:26.713 00:05:26.713 Elapsed time = 2.140 seconds 00:05:26.713 [2024-05-15 04:42:37.160200] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160285] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160358] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160397] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8560:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:26.713 [2024-05-15 04:42:37.160497] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160566] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160589] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160627] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:26.713 [2024-05-15 04:42:37.160663] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:26.713 [2024-05-15 04:42:37.160756] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:26.713 [2024-05-15 04:42:37.160801] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160822] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev 
bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160846] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160865] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160890] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.160919] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:26.713 [2024-05-15 04:42:37.161017] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:26.713 [2024-05-15 04:42:37.161046] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:26.713 [2024-05-15 04:42:37.161128] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.161154] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.161173] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.161257] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.161295] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.161322] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:26.713 [2024-05-15 04:42:37.161525] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:26.713 04:42:37 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:26.713 00:05:26.713 00:05:26.713 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.713 http://cunit.sourceforge.net/ 00:05:26.713 00:05:26.713 00:05:26.713 Suite: nvme 00:05:26.713 Test: test_create_ctrlr ...passed 00:05:26.713 Test: test_reset_ctrlr ...passed 00:05:26.713 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:26.713 Test: test_failover_ctrlr ...[2024-05-15 04:42:37.210523] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
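The claim_v2 block that closed out bdev_ut above encodes, in its error strings, a compatibility matrix for v2 claims: exclusive_write and read_many_write_one admit a single owner, read_many_write_one rejects keys, read_many_write_none refuses writable descriptors, and read_many_write_many is shared but keyed. A rough model of those rules — all names, types and the exact ordering of checks are invented for this sketch; this is not the SPDK implementation — looks like:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    enum claim_type {
        CLAIM_NONE,
        CLAIM_EXCL_WRITE,   /* "exclusive_write": single owner          */
        CLAIM_RW_ONCE,      /* "read_many_write_one": single writer     */
        CLAIM_RO_MANY,      /* "read_many_write_none": shared readers   */
        CLAIM_RW_MANY       /* "read_many_write_many": shared and keyed */
    };

    struct bdev_model {
        enum claim_type claim;
        const char *holder;      /* module that took the claim */
        const char *shared_key;  /* used only by CLAIM_RW_MANY */
    };

    static bool claim_v2(struct bdev_model *b, enum claim_type want,
                         const char *module, const char *key, bool desc_writable)
    {
        /* Option checks, one per error string in the log: */
        if (want == CLAIM_RW_ONCE && key != NULL)
            return false;  /* "key option not supported with read-write-once claims" */
        if (want == CLAIM_RO_MANY && desc_writable)
            return false;  /* "Cannot obtain read-only-many claim with writable descriptor" */
        if (want == CLAIM_RW_MANY && key == NULL)
            return false;  /* "shared_claim_key option required with read-write-may claims" */

        if (b->claim == CLAIM_NONE) {               /* first claim always succeeds */
            b->claim = want;
            b->holder = module;
            b->shared_key = (want == CLAIM_RW_MANY) ? key : NULL;
            return true;
        }
        if (b->claim != want || want == CLAIM_EXCL_WRITE || want == CLAIM_RW_ONCE)
            return false;  /* "bdev ... already claimed: type ... by module ..." */
        if (want == CLAIM_RW_MANY && strcmp(b->shared_key, key) != 0)
            return false;  /* "bdev ... already claimed with another key" */
        return true;       /* compatible shared claim */
    }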
00:05:26.713 passed 00:05:26.713 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:05:26.713 Test: test_pending_reset ...[2024-05-15 04:42:37.212586] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.713 [2024-05-15 04:42:37.212870] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.713 [2024-05-15 04:42:37.213051] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.713 passed 00:05:26.713 Test: test_attach_ctrlr ...passed 00:05:26.713 Test: test_aer_cb ...[2024-05-15 04:42:37.214815] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.713 [2024-05-15 04:42:37.215057] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.713 [2024-05-15 04:42:37.215960] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:26.713 passed 00:05:26.713 Test: test_submit_nvme_cmd ...passed 00:05:26.713 Test: test_add_remove_trid ...passed 00:05:26.713 Test: test_abort ...[2024-05-15 04:42:37.219167] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:26.713 passed 00:05:26.713 Test: test_get_io_qpair ...passed 00:05:26.714 Test: test_bdev_unregister ...passed 00:05:26.714 Test: test_compare_ns ...passed 00:05:26.714 Test: test_init_ana_log_page ...passed 00:05:26.714 Test: test_get_memory_domains ...passed 00:05:26.714 Test: test_reconnect_qpair ...passed 00:05:26.714 Test: test_create_bdev_ctrlr ...passed 00:05:26.714 Test: test_add_multi_ns_to_bdev ...passed 00:05:26.714 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:26.714 Test: test_admin_path ...passed 00:05:26.714 Test: test_reset_bdev_ctrlr ...[2024-05-15 04:42:37.221265] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.221468] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:26.714 [2024-05-15 04:42:37.221911] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 
00:05:26.714 passed 00:05:26.714 Test: test_find_io_path ...passed 00:05:26.714 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:26.714 Test: test_retry_io_for_io_path_error ...passed 00:05:26.714 Test: test_retry_io_count ...passed 00:05:26.714 Test: test_concurrent_read_ana_log_page ...passed 00:05:26.714 Test: test_retry_io_for_ana_error ...passed 00:05:26.714 Test: test_check_io_error_resiliency_params ...passed 00:05:26.714 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:26.714 Test: test_reconnect_ctrlr ...passed 00:05:26.714 Test: test_retry_failover_ctrlr ...passed 00:05:26.714 Test: test_fail_path ...passed 00:05:26.714 Test: test_nvme_ns_cmp ...passed 00:05:26.714 Test: test_ana_transition ...passed 00:05:26.714 Test: test_set_preferred_path ...[2024-05-15 04:42:37.224149] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:26.714 [2024-05-15 04:42:37.224207] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:26.714 [2024-05-15 04:42:37.224228] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:26.714 [2024-05-15 04:42:37.224267] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:26.714 [2024-05-15 04:42:37.224285] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:26.714 [2024-05-15 04:42:37.224309] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:26.714 [2024-05-15 04:42:37.224327] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:26.714 [2024-05-15 04:42:37.224360] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:26.714 [2024-05-15 04:42:37.224388] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:26.714 [2024-05-15 04:42:37.224655] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.224729] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.224813] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.224878] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:26.714 [2024-05-15 04:42:37.224930] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.225073] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.225281] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.225352] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.225398] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.225435] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.225496] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 passed 00:05:26.714 Test: test_find_next_io_path ...passed 00:05:26.714 Test: test_find_io_path_min_qd ...passed 00:05:26.714 Test: test_disable_auto_failback ...passed 00:05:26.714 Test: test_set_multipath_policy ...passed 00:05:26.714 Test: test_uuid_generation ...[2024-05-15 04:42:37.226382] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 passed 00:05:26.714 Test: test_retry_io_to_same_path ...passed 00:05:26.714 Test: test_race_between_reset_and_disconnected ...passed 00:05:26.714 Test: test_ctrlr_op_rpc ...passed 00:05:26.714 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:26.714 Test: test_disable_enable_ctrlr ...passed 00:05:26.714 Test: test_delete_ctrlr_done ...passed 00:05:26.714 Test: test_ns_remove_during_reset ...passed 00:05:26.714 00:05:26.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.714 suites 1 1 n/a 0 0 00:05:26.714 tests 48 48 48 0 0 00:05:26.714 asserts 3553 3553 3553 0 n/a 00:05:26.714 00:05:26.714 Elapsed time = 0.030 seconds 00:05:26.714 [2024-05-15 04:42:37.227794] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:26.714 [2024-05-15 04:42:37.227861] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
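The test_check_io_error_resiliency_params failures above spell out, one message at a time, the invariants bdev_nvme enforces between ctrlr_loss_timeout_sec, reconnect_delay_sec and fast_io_fail_timeout_sec. Gathered into a single validator — a sketch of the constraints the messages describe, with an invented struct and signature, not a copy of the SPDK function — they read:

    #include <stdbool.h>
    #include <stdint.h>

    struct resiliency_params {
        int32_t  ctrlr_loss_timeout_sec;   /* -1 appears to mean "retry forever" */
        uint32_t reconnect_delay_sec;
        uint32_t fast_io_fail_timeout_sec;
    };

    static bool params_valid(const struct resiliency_params *p)
    {
        if (p->ctrlr_loss_timeout_sec < -1)
            return false;  /* "ctrlr_loss_timeout_sec can't be less than -1." */

        if (p->ctrlr_loss_timeout_sec == 0) {
            /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec
             *  must be 0 if ctrlr_loss_timeout_sec is 0." */
            return p->reconnect_delay_sec == 0 && p->fast_io_fail_timeout_sec == 0;
        }

        /* "reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0." */
        if (p->reconnect_delay_sec == 0)
            return false;

        if (p->ctrlr_loss_timeout_sec > 0) {
            /* "reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec." */
            if ((int64_t)p->reconnect_delay_sec > p->ctrlr_loss_timeout_sec)
                return false;
            /* "fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec." */
            if ((int64_t)p->fast_io_fail_timeout_sec > p->ctrlr_loss_timeout_sec)
                return false;
        }

        /* "reconnect_delay_sec can't be more than fast_io_fail_timeout_sec." */
        if (p->fast_io_fail_timeout_sec != 0 &&
            p->reconnect_delay_sec > p->fast_io_fail_timeout_sec)
            return false;

        return true;
    }

Note that the two "can't be more than ctrlr_loss_timeout_sec" checks are skipped when the timeout is -1, which matches the tests above treating an infinite controller-loss timeout as compatible with any delay values.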
00:05:26.714 04:42:37 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:26.714 Test Options 00:05:26.714 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:26.714 00:05:26.714 00:05:26.714 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.714 http://cunit.sourceforge.net/ 00:05:26.714 00:05:26.714 00:05:26.714 Suite: raid 00:05:26.714 Test: test_create_raid ...passed 00:05:26.714 Test: test_create_raid_superblock ...passed 00:05:26.714 Test: test_delete_raid ...passed 00:05:26.714 Test: test_create_raid_invalid_args ...[2024-05-15 04:42:37.261430] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:26.714 [2024-05-15 04:42:37.261782] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:26.714 [2024-05-15 04:42:37.262031] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:26.714 [2024-05-15 04:42:37.262209] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:26.714 [2024-05-15 04:42:37.263101] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:26.714 passed 00:05:26.714 Test: test_delete_raid_invalid_args ...passed 00:05:26.714 Test: test_io_channel ...passed 00:05:26.714 Test: test_reset_io ...passed 00:05:26.714 Test: test_write_io ...passed 00:05:26.714 Test: test_read_io ...passed 00:05:26.714 Test: test_unmap_io ...passed 00:05:26.714 Test: test_io_failure ...[2024-05-15 04:42:38.360600] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:26.714 passed 00:05:26.714 Test: test_multi_raid_no_io ...passed 00:05:26.714 Test: test_multi_raid_with_io ...passed 00:05:26.714 Test: test_io_type_supported ...passed 00:05:26.714 Test: test_raid_json_dump_info ...passed 00:05:26.714 Test: test_context_size ...passed 00:05:26.714 Test: test_raid_level_conversions ...passed 00:05:26.714 Test: test_raid_process ...passed 00:05:26.714 Test: test_raid_io_split ...passed 00:05:26.714 00:05:26.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.714 suites 1 1 n/a 0 0 00:05:26.714 tests 19 19 19 0 0 00:05:26.714 asserts 177879 177879 177879 0 n/a 00:05:26.714 00:05:26.714 Elapsed time = 1.110 seconds 00:05:26.714 04:42:38 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:26.714 00:05:26.714 00:05:26.714 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.714 http://cunit.sourceforge.net/ 00:05:26.714 00:05:26.714 00:05:26.714 Suite: raid_sb 00:05:26.714 Test: test_raid_bdev_write_superblock ...passed 00:05:26.714 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:26.714 Test: test_raid_bdev_parse_superblock ...[2024-05-15 04:42:38.414398] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:26.714 passed 00:05:26.714 00:05:26.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.714 suites 1 1 n/a 0 0 00:05:26.714 tests 3 3 3 0 0 
00:05:26.714 asserts 32 32 32 0 n/a 00:05:26.714 00:05:26.714 Elapsed time = 0.000 seconds 00:05:26.714 04:42:38 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:26.714 00:05:26.714 00:05:26.714 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.714 http://cunit.sourceforge.net/ 00:05:26.714 00:05:26.714 00:05:26.714 Suite: concat 00:05:26.714 Test: test_concat_start ...passed 00:05:26.714 Test: test_concat_rw ...passed 00:05:26.714 Test: test_concat_null_payload ...passed 00:05:26.714 00:05:26.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.714 suites 1 1 n/a 0 0 00:05:26.714 tests 3 3 3 0 0 00:05:26.714 asserts 8097 8097 8097 0 n/a 00:05:26.714 00:05:26.714 Elapsed time = 0.010 seconds 00:05:26.714 04:42:38 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:26.714 00:05:26.714 00:05:26.714 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.714 http://cunit.sourceforge.net/ 00:05:26.714 00:05:26.714 00:05:26.714 Suite: raid1 00:05:26.715 Test: test_raid1_start ...passed 00:05:26.715 Test: test_raid1_read_balancing ...passed 00:05:26.715 00:05:26.715 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.715 suites 1 1 n/a 0 0 00:05:26.715 tests 2 2 2 0 0 00:05:26.715 asserts 2856 2856 2856 0 n/a 00:05:26.715 00:05:26.715 Elapsed time = 0.010 seconds 00:05:26.715 04:42:38 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:26.715 00:05:26.715 00:05:26.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.715 http://cunit.sourceforge.net/ 00:05:26.715 00:05:26.715 00:05:26.715 Suite: zone 00:05:26.715 Test: test_zone_get_operation ...passed 00:05:26.715 Test: test_bdev_zone_get_info ...passed 00:05:26.715 Test: test_bdev_zone_management ...passed 00:05:26.715 Test: test_bdev_zone_append ...passed 00:05:26.715 Test: test_bdev_zone_append_with_md ...passed 00:05:26.715 Test: test_bdev_zone_appendv ...passed 00:05:26.715 Test: test_bdev_zone_appendv_with_md ...passed 00:05:26.715 Test: test_bdev_io_get_append_location ...passed 00:05:26.715 00:05:26.715 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.715 suites 1 1 n/a 0 0 00:05:26.715 tests 8 8 8 0 0 00:05:26.715 asserts 94 94 94 0 n/a 00:05:26.715 00:05:26.715 Elapsed time = 0.000 seconds 00:05:26.715 04:42:38 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:26.715 00:05:26.715 00:05:26.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.715 http://cunit.sourceforge.net/ 00:05:26.715 00:05:26.715 00:05:26.715 Suite: gpt_parse 00:05:26.715 Test: test_parse_mbr_and_primary ...[2024-05-15 04:42:38.555070] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:26.715 [2024-05-15 04:42:38.555331] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:26.715 [2024-05-15 04:42:38.555400] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:26.715 [2024-05-15 04:42:38.555498] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:26.715 [2024-05-15 04:42:38.555556] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: 
Num_partition_entries=1633771873 which exceeds max=128 00:05:26.715 [2024-05-15 04:42:38.555628] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:26.715 passed 00:05:26.715 Test: test_parse_secondary ...[2024-05-15 04:42:38.555932] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:26.715 [2024-05-15 04:42:38.555982] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:26.715 [2024-05-15 04:42:38.556014] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:26.715 [2024-05-15 04:42:38.556046] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:26.715 passed 00:05:26.715 Test: test_check_mbr ...[2024-05-15 04:42:38.556321] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:26.715 passed 00:05:26.715 Test: test_read_header ...[2024-05-15 04:42:38.556359] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:26.715 [2024-05-15 04:42:38.556398] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:26.715 [2024-05-15 04:42:38.556511] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:26.715 [2024-05-15 04:42:38.556598] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:26.715 [2024-05-15 04:42:38.556658] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:26.715 [2024-05-15 04:42:38.556692] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:26.715 passed 00:05:26.715 Test: test_read_partitions ...[2024-05-15 04:42:38.556748] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:26.715 [2024-05-15 04:42:38.556794] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:26.715 [2024-05-15 04:42:38.556857] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:26.715 [2024-05-15 04:42:38.556904] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:26.715 [2024-05-15 04:42:38.556934] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:26.715 [2024-05-15 04:42:38.557090] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:26.715 passed 00:05:26.715 00:05:26.715 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.715 suites 1 1 n/a 0 0 00:05:26.715 tests 5 5 5 0 0 00:05:26.715 asserts 33 33 33 0 n/a 00:05:26.715 00:05:26.715 Elapsed time = 0.010 seconds 00:05:26.715 04:42:38 -- unit/unittest.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:26.715 00:05:26.715 00:05:26.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.715 http://cunit.sourceforge.net/ 00:05:26.715 00:05:26.715 00:05:26.715 Suite: bdev_part 00:05:26.715 Test: part_test ...[2024-05-15 04:42:38.589237] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:26.715 passed 00:05:26.715 Test: part_free_test ...passed 00:05:26.715 Test: part_get_io_channel_test ...passed 00:05:26.715 Test: part_construct_ext ...passed 00:05:26.715 00:05:26.715 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.715 suites 1 1 n/a 0 0 00:05:26.715 tests 4 4 4 0 0 00:05:26.715 asserts 48 48 48 0 n/a 00:05:26.715 00:05:26.715 Elapsed time = 0.060 seconds 00:05:26.715 04:42:38 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:26.715 00:05:26.715 00:05:26.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.715 http://cunit.sourceforge.net/ 00:05:26.715 00:05:26.715 00:05:26.715 Suite: scsi_nvme_suite 00:05:26.715 Test: scsi_nvme_translate_test ...passed 00:05:26.715 00:05:26.715 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.715 suites 1 1 n/a 0 0 00:05:26.715 tests 1 1 1 0 0 00:05:26.715 asserts 104 104 104 0 n/a 00:05:26.715 00:05:26.715 Elapsed time = 0.000 seconds 00:05:26.715 04:42:38 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:26.715 00:05:26.715 00:05:26.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.715 http://cunit.sourceforge.net/ 00:05:26.715 00:05:26.715 00:05:26.715 Suite: lvol 00:05:26.715 Test: ut_lvs_init ...[2024-05-15 04:42:38.721819] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:26.715 passed 00:05:26.715 Test: ut_lvol_init ...passed 00:05:26.715 Test: ut_lvol_snapshot ...passed 00:05:26.715 Test: ut_lvol_clone ...passed 00:05:26.715 Test: ut_lvs_destroy ...passed 00:05:26.715 Test: ut_lvs_unload ...passed 00:05:26.715 Test: ut_lvol_resize ...passed 00:05:26.715 Test: ut_lvol_set_read_only ...passed 00:05:26.715 Test: ut_lvol_hotremove ...passed 00:05:26.715 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:26.715 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:26.715 Test: ut_lvol_read_write ...passed 00:05:26.715 Test: ut_vbdev_lvol_submit_request ...passed 00:05:26.715 Test: ut_lvol_examine_config ...passed 00:05:26.715 Test: ut_lvol_examine_disk ...[2024-05-15 04:42:38.722215] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:26.715 [2024-05-15 04:42:38.723002] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:26.715 [2024-05-15 04:42:38.723462] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:26.715 passed 00:05:26.715 Test: ut_lvol_rename ...passed 00:05:26.715 Test: ut_bdev_finish ...passed 00:05:26.715 Test: ut_lvs_rename ...passed 00:05:26.715 Test: ut_lvol_seek ...passed 00:05:26.715 Test: ut_esnap_dev_create ...[2024-05-15 04:42:38.724240] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:26.715 [2024-05-15 04:42:38.724352] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:26.715 [2024-05-15 04:42:38.724776] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:26.715 passed 00:05:26.715 Test: ut_lvol_esnap_clone_bad_args ...[2024-05-15 04:42:38.724848] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:26.715 [2024-05-15 04:42:38.724894] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:26.715 [2024-05-15 04:42:38.724960] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:26.715 [2024-05-15 04:42:38.725112] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:26.715 passed 00:05:26.715 00:05:26.715 [2024-05-15 04:42:38.725158] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:26.715 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.715 suites 1 1 n/a 0 0 00:05:26.715 tests 21 21 21 0 0 00:05:26.715 asserts 712 712 712 0 n/a 00:05:26.715 00:05:26.715 Elapsed time = 0.000 seconds 00:05:26.715 04:42:38 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:26.715 00:05:26.715 00:05:26.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.716 http://cunit.sourceforge.net/ 00:05:26.716 00:05:26.716 00:05:26.716 Suite: zone_block 00:05:26.716 Test: test_zone_block_create ...passed 00:05:26.716 Test: test_zone_block_create_invalid ...[2024-05-15 04:42:38.788418] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:26.716 passed 00:05:26.716 Test: test_get_zone_info ...[2024-05-15 04:42:38.789048] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-15 04:42:38.789327] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:26.716 [2024-05-15 04:42:38.789434] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-15 04:42:38.789573] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:26.716 [2024-05-15 04:42:38.789657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-15 04:42:38.789776] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:26.716 [2024-05-15 04:42:38.789881] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: 
Invalid argument[2024-05-15 04:42:38.790631] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.790743] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.790833] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 passed 00:05:26.716 Test: test_supported_io_types ...passed 00:05:26.716 Test: test_reset_zone ...[2024-05-15 04:42:38.791891] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.791964] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 passed 00:05:26.716 Test: test_open_zone ...[2024-05-15 04:42:38.792456] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.793485] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.793586] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 passed 00:05:26.716 Test: test_zone_write ...[2024-05-15 04:42:38.794143] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:26.716 [2024-05-15 04:42:38.794223] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.794351] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:26.716 [2024-05-15 04:42:38.794444] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.804118] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:26.716 [2024-05-15 04:42:38.804173] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.804241] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:26.716 [2024-05-15 04:42:38.804272] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:26.716 [2024-05-15 04:42:38.811538] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:26.716 [2024-05-15 04:42:38.811611] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 passed 00:05:26.716 Test: test_zone_read ...passed 00:05:26.716 Test: test_close_zone ...[2024-05-15 04:42:38.812193] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:26.716 [2024-05-15 04:42:38.812242] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.812302] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:26.716 [2024-05-15 04:42:38.812336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.812731] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:26.716 [2024-05-15 04:42:38.812763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.813059] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.813116] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.813268] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.813311] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 passed 00:05:26.716 Test: test_finish_zone ...passed 00:05:26.716 Test: test_append_zone ...[2024-05-15 04:42:38.813737] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.813780] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.814045] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:26.716 [2024-05-15 04:42:38.814075] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 [2024-05-15 04:42:38.814148] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:26.716 [2024-05-15 04:42:38.814177] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
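
Every "Run Summary: Type Total Ran Passed Failed Inactive" table in this log, including the zone_block one just below, is CUnit basic-mode output. As an illustrative sketch (not the actual SPDK test main), a suite that produces such output is registered and run like this:

#include <CUnit/Basic.h>

static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);   /* placeholder assertion */
}

int main(void)
{
    CU_pSuite suite;
    unsigned int failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("zone_block", NULL, NULL);
    CU_add_test(suite, "test_example", test_example);

    CU_basic_set_mode(CU_BRM_VERBOSE);   /* prints the per-test "...passed" lines */
    CU_basic_run_tests();                /* prints the Run Summary table */
    failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures != 0;
}
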
00:05:26.716 passed 00:05:26.716 00:05:26.716 [2024-05-15 04:42:38.828517] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:26.716 [2024-05-15 04:42:38.828577] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:26.716 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.716 suites 1 1 n/a 0 0 00:05:26.716 tests 11 11 11 0 0 00:05:26.716 asserts 3437 3437 3437 0 n/a 00:05:26.716 00:05:26.716 Elapsed time = 0.040 seconds 00:05:26.716 04:42:38 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:26.716 00:05:26.716 00:05:26.716 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.716 http://cunit.sourceforge.net/ 00:05:26.716 00:05:26.716 00:05:26.716 Suite: bdev 00:05:26.716 Test: basic ...[2024-05-15 04:42:38.944957] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51d421): Operation not permitted (rc=-1) 00:05:26.716 [2024-05-15 04:42:38.945225] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x51d3e0): Operation not permitted (rc=-1) 00:05:26.716 [2024-05-15 04:42:38.945269] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51d421): Operation not permitted (rc=-1) 00:05:26.716 passed 00:05:26.716 Test: unregister_and_close ...passed 00:05:26.716 Test: unregister_and_close_different_threads ...passed 00:05:26.716 Test: basic_qos ...passed 00:05:26.716 Test: put_channel_during_reset ...passed 00:05:26.716 Test: aborted_reset ...passed 00:05:26.716 Test: aborted_reset_no_outstanding_io ...passed 00:05:26.716 Test: io_during_reset ...passed 00:05:26.716 Test: reset_completions ...passed 00:05:26.716 Test: io_during_qos_queue ...passed 00:05:26.716 Test: io_during_qos_reset ...passed 00:05:26.716 Test: enomem ...passed 00:05:26.716 Test: enomem_multi_bdev ...passed 00:05:26.716 Test: enomem_multi_bdev_unregister ...passed 00:05:26.716 Test: enomem_multi_io_target ...passed 00:05:26.716 Test: qos_dynamic_enable ...passed 00:05:26.716 Test: bdev_histograms_mt ...passed 00:05:26.716 Test: bdev_set_io_timeout_mt ...passed 00:05:26.716 Test: lock_lba_range_then_submit_io ...[2024-05-15 04:42:39.986334] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:26.716 [2024-05-15 04:42:40.008094] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x51d3a0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:26.716 passed 00:05:26.716 Test: unregister_during_reset ...passed 00:05:26.716 Test: event_notify_and_close ...passed 00:05:26.716 Suite: bdev_wrong_thread 00:05:26.716 Test: spdk_bdev_register_wt ...[2024-05-15 04:42:40.179803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8359:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:05:26.716 passed 00:05:26.716 Test: spdk_bdev_examine_wt ...[2024-05-15 04:42:40.180329] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:05:26.716 passed 00:05:26.716 00:05:26.716 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.716 suites 2 2 n/a 0 0 00:05:26.716 tests 23 23 23 0 0 00:05:26.716 asserts 601 601 601 0 n/a 
00:05:26.716 00:05:26.716 Elapsed time = 1.270 seconds 00:05:26.716 00:05:26.716 real 0m5.206s 00:05:26.716 user 0m1.965s 00:05:26.717 sys 0m3.235s 00:05:26.717 04:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.717 04:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.717 ************************************ 00:05:26.717 END TEST unittest_bdev 00:05:26.717 ************************************ 00:05:26.717 04:42:40 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:26.717 04:42:40 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:26.717 04:42:40 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:26.717 04:42:40 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:26.717 04:42:40 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:05:26.717 04:42:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.717 04:42:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.717 04:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.717 ************************************ 00:05:26.717 START TEST unittest_blob_blobfs 00:05:26.717 ************************************ 00:05:26.717 04:42:40 -- common/autotest_common.sh@1104 -- # unittest_blob 00:05:26.717 04:42:40 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:26.717 04:42:40 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:26.717 00:05:26.717 00:05:26.717 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.717 http://cunit.sourceforge.net/ 00:05:26.717 00:05:26.717 00:05:26.717 Suite: blob_nocopy_noextent 00:05:26.717 Test: blob_init ...[2024-05-15 04:42:40.299140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:26.717 passed 00:05:26.717 Test: blob_thin_provision ...passed 00:05:26.717 Test: blob_read_only ...passed 00:05:26.717 Test: bs_load ...[2024-05-15 04:42:40.383221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:26.717 passed 00:05:26.717 Test: bs_load_custom_cluster_size ...passed 00:05:26.717 Test: bs_load_after_failed_grow ...passed 00:05:26.717 Test: bs_cluster_sz ...[2024-05-15 04:42:40.411694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:26.717 [2024-05-15 04:42:40.412314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
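
The bs_cluster_sz records here and just below show blobstore option validation: zeroed options are rejected by bs_opts_verify, the metadata reservation must fit in the available clusters, and (as the next record shows) the cluster size may not be smaller than the 4096-byte metadata page. A minimal init sketch against the public API; callback bodies are elided, and spdk_bs_opts_init's signature varies across SPDK versions, so treat this as illustrative:

#include "spdk/blob.h"

static void bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
    /* bserrno is negative for invalid options, 0 on success */
}

static void init_blobstore(struct spdk_bs_dev *bs_dev)
{
    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts);    /* newer SPDK versions take an extra size argument */
    opts.cluster_sz = 4096;      /* must be at least one metadata page (4 KiB) */
    spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
}
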
00:05:26.717 [2024-05-15 04:42:40.412477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:26.717 passed 00:05:26.717 Test: bs_resize_md ...passed 00:05:26.717 Test: bs_destroy ...passed 00:05:26.717 Test: bs_type ...passed 00:05:26.717 Test: bs_super_block ...passed 00:05:26.717 Test: bs_test_recover_cluster_count ...passed 00:05:26.717 Test: bs_grow_live ...passed 00:05:26.717 Test: bs_grow_live_no_space ...passed 00:05:26.717 Test: bs_test_grow ...passed 00:05:26.717 Test: blob_serialize_test ...passed 00:05:26.717 Test: super_block_crc ...passed 00:05:26.717 Test: blob_thin_prov_write_count_io ...passed 00:05:26.717 Test: bs_load_iter_test ...passed 00:05:26.717 Test: blob_relations ...[2024-05-15 04:42:40.567192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.717 [2024-05-15 04:42:40.567298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.568181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.717 [2024-05-15 04:42:40.568246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 passed 00:05:26.717 Test: blob_relations2 ...[2024-05-15 04:42:40.582015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.717 [2024-05-15 04:42:40.582101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.582149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.717 [2024-05-15 04:42:40.582168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.584454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.717 [2024-05-15 04:42:40.584662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.585527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.717 [2024-05-15 04:42:40.585658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 passed 00:05:26.717 Test: blob_relations3 ...passed 00:05:26.717 Test: blobstore_clean_power_failure ...passed 00:05:26.717 Test: blob_delete_snapshot_power_failure ...[2024-05-15 04:42:40.753344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:26.717 [2024-05-15 04:42:40.766782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:26.717 [2024-05-15 04:42:40.766876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.717 [2024-05-15 04:42:40.766935] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.780288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:26.717 [2024-05-15 04:42:40.780382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:26.717 [2024-05-15 04:42:40.780451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.717 [2024-05-15 04:42:40.780486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.793860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:26.717 [2024-05-15 04:42:40.793994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.806814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:26.717 [2024-05-15 04:42:40.806939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 [2024-05-15 04:42:40.822369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:26.717 [2024-05-15 04:42:40.822457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.717 passed 00:05:26.717 Test: blob_create_snapshot_power_failure ...[2024-05-15 04:42:40.861276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:26.717 [2024-05-15 04:42:40.887053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:26.717 [2024-05-15 04:42:40.898993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:26.717 passed 00:05:26.976 Test: blob_io_unit ...passed 00:05:26.976 Test: blob_io_unit_compatibility ...passed 00:05:26.976 Test: blob_ext_md_pages ...passed 00:05:26.976 Test: blob_esnap_io_4096_4096 ...passed 00:05:26.976 Test: blob_esnap_io_512_512 ...passed 00:05:26.976 Test: blob_esnap_io_4096_512 ...passed 00:05:26.976 Test: blob_esnap_io_512_4096 ...passed 00:05:26.976 Suite: blob_bs_nocopy_noextent 00:05:26.976 Test: blob_open ...passed 00:05:26.976 Test: blob_create ...[2024-05-15 04:42:41.137458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:26.976 passed 00:05:27.234 Test: blob_create_loop ...passed 00:05:27.234 Test: blob_create_fail ...[2024-05-15 04:42:41.242024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.234 passed 00:05:27.234 Test: blob_create_internal ...passed 00:05:27.234 Test: blob_create_zero_extent ...passed 00:05:27.234 Test: blob_snapshot ...passed 00:05:27.234 Test: blob_clone ...passed 00:05:27.234 Test: blob_inflate ...[2024-05-15 04:42:41.431825] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:27.234 passed 00:05:27.492 Test: blob_delete ...passed 00:05:27.492 Test: blob_resize_test ...[2024-05-15 04:42:41.501761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:27.492 passed 00:05:27.492 Test: channel_ops ...passed 00:05:27.492 Test: blob_super ...passed 00:05:27.492 Test: blob_rw_verify_iov ...passed 00:05:27.492 Test: blob_unmap ...passed 00:05:27.492 Test: blob_iter ...passed 00:05:27.492 Test: blob_parse_md ...passed 00:05:27.751 Test: bs_load_pending_removal ...passed 00:05:27.751 Test: bs_unload ...[2024-05-15 04:42:41.778803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:27.751 passed 00:05:27.751 Test: bs_usable_clusters ...passed 00:05:27.751 Test: blob_crc ...[2024-05-15 04:42:41.845912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:27.751 [2024-05-15 04:42:41.846030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:27.751 passed 00:05:27.751 Test: blob_flags ...passed 00:05:27.751 Test: bs_version ...passed 00:05:27.751 Test: blob_set_xattrs_test ...[2024-05-15 04:42:41.943083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.751 [2024-05-15 04:42:41.943190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.751 passed 00:05:28.010 Test: blob_thin_prov_alloc ...passed 00:05:28.010 Test: blob_insert_cluster_msg_test ...passed 00:05:28.010 Test: blob_thin_prov_rw ...passed 00:05:28.010 Test: blob_thin_prov_rle ...passed 00:05:28.010 Test: blob_thin_prov_rw_iov ...passed 00:05:28.010 Test: blob_snapshot_rw ...passed 00:05:28.268 Test: blob_snapshot_rw_iov ...passed 00:05:28.268 Test: blob_inflate_rw ...passed 00:05:28.268 Test: blob_snapshot_freeze_io ...passed 00:05:28.528 Test: blob_operation_split_rw ...passed 00:05:28.528 Test: blob_operation_split_rw_iov ...passed 00:05:28.787 Test: blob_simultaneous_operations ...[2024-05-15 04:42:42.781028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.787 [2024-05-15 04:42:42.781179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.787 [2024-05-15 04:42:42.783560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.787 [2024-05-15 04:42:42.783635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.787 [2024-05-15 04:42:42.799617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.787 [2024-05-15 04:42:42.799687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.787 [2024-05-15 04:42:42.799987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:05:28.787 [2024-05-15 04:42:42.800031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.787 passed 00:05:28.787 Test: blob_persist_test ...passed 00:05:28.787 Test: blob_decouple_snapshot ...passed 00:05:28.787 Test: blob_seek_io_unit ...passed 00:05:28.787 Test: blob_nested_freezes ...passed 00:05:28.787 Suite: blob_blob_nocopy_noextent 00:05:29.046 Test: blob_write ...passed 00:05:29.046 Test: blob_read ...passed 00:05:29.046 Test: blob_rw_verify ...passed 00:05:29.046 Test: blob_rw_verify_iov_nomem ...passed 00:05:29.046 Test: blob_rw_iov_read_only ...passed 00:05:29.046 Test: blob_xattr ...passed 00:05:29.046 Test: blob_dirty_shutdown ...passed 00:05:29.046 Test: blob_is_degraded ...passed 00:05:29.046 Suite: blob_esnap_bs_nocopy_noextent 00:05:29.305 Test: blob_esnap_create ...passed 00:05:29.305 Test: blob_esnap_thread_add_remove ...passed 00:05:29.305 Test: blob_esnap_clone_snapshot ...passed 00:05:29.305 Test: blob_esnap_clone_inflate ...passed 00:05:29.305 Test: blob_esnap_clone_decouple ...passed 00:05:29.305 Test: blob_esnap_clone_reload ...passed 00:05:29.305 Test: blob_esnap_hotplug ...passed 00:05:29.305 Suite: blob_nocopy_extent 00:05:29.305 Test: blob_init ...[2024-05-15 04:42:43.490306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:29.305 passed 00:05:29.305 Test: blob_thin_provision ...passed 00:05:29.305 Test: blob_read_only ...passed 00:05:29.564 Test: bs_load ...[2024-05-15 04:42:43.537932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:29.564 passed 00:05:29.564 Test: bs_load_custom_cluster_size ...passed 00:05:29.564 Test: bs_load_after_failed_grow ...passed 00:05:29.564 Test: bs_cluster_sz ...[2024-05-15 04:42:43.563805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:29.564 [2024-05-15 04:42:43.564126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
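
Two distinct deletability rules show up in the blob records above: a snapshot that is still open cannot be removed ("Cannot remove snapshot because it is open"), and neither can a snapshot with more than one clone, since SPDK re-parents only a single clone on deletion. A hedged sketch of the sequence that trips the second rule; the calls are asynchronous, so the later steps are shown as comments rather than chained callbacks:

#include "spdk/blob.h"

/* Assumes 'bs' is a loaded blobstore and 'blobid' names an existing blob. */
static void start_snapshot(struct spdk_blob_store *bs, spdk_blob_id blobid,
                           spdk_blob_op_with_id_complete cb_fn, void *cb_arg)
{
    /* 1. Snapshot the blob; cb_fn receives the snapshot's id. */
    spdk_bs_create_snapshot(bs, blobid, NULL, cb_fn, cb_arg);

    /* 2. From cb_fn, create two clones of the snapshot:
     *      spdk_bs_create_clone(bs, snapshot_id, NULL, clone_cb, arg);  (twice)
     * 3. spdk_bs_delete_blob(bs, snapshot_id, delete_cb, arg) now fails,
     *    logging "Cannot remove snapshot with more than one clone". */
}
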
00:05:29.564 [2024-05-15 04:42:43.564198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:29.564 passed 00:05:29.564 Test: bs_resize_md ...passed 00:05:29.564 Test: bs_destroy ...passed 00:05:29.564 Test: bs_type ...passed 00:05:29.564 Test: bs_super_block ...passed 00:05:29.564 Test: bs_test_recover_cluster_count ...passed 00:05:29.564 Test: bs_grow_live ...passed 00:05:29.564 Test: bs_grow_live_no_space ...passed 00:05:29.565 Test: bs_test_grow ...passed 00:05:29.565 Test: blob_serialize_test ...passed 00:05:29.565 Test: super_block_crc ...passed 00:05:29.565 Test: blob_thin_prov_write_count_io ...passed 00:05:29.565 Test: bs_load_iter_test ...passed 00:05:29.565 Test: blob_relations ...[2024-05-15 04:42:43.712554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.565 [2024-05-15 04:42:43.712824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.565 [2024-05-15 04:42:43.714183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.565 [2024-05-15 04:42:43.714269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.565 passed 00:05:29.565 Test: blob_relations2 ...[2024-05-15 04:42:43.729657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.565 [2024-05-15 04:42:43.730023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.565 [2024-05-15 04:42:43.730085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.565 [2024-05-15 04:42:43.730148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.565 [2024-05-15 04:42:43.732068] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.565 [2024-05-15 04:42:43.732148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.565 [2024-05-15 04:42:43.732698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.565 [2024-05-15 04:42:43.732774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.565 passed 00:05:29.565 Test: blob_relations3 ...passed 00:05:29.824 Test: blobstore_clean_power_failure ...passed 00:05:29.824 Test: blob_delete_snapshot_power_failure ...[2024-05-15 04:42:43.890173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:29.824 [2024-05-15 04:42:43.906393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:29.824 [2024-05-15 04:42:43.918172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:29.824 [2024-05-15 04:42:43.918253] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:29.824 [2024-05-15 04:42:43.918284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.824 [2024-05-15 04:42:43.930033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:29.824 [2024-05-15 04:42:43.930114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:29.824 [2024-05-15 04:42:43.930153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:29.824 [2024-05-15 04:42:43.930184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.824 [2024-05-15 04:42:43.941954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:29.824 [2024-05-15 04:42:43.942032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:29.824 [2024-05-15 04:42:43.942072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:29.824 [2024-05-15 04:42:43.942129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.824 [2024-05-15 04:42:43.954050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:29.824 [2024-05-15 04:42:43.954140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.824 [2024-05-15 04:42:43.966167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:29.824 [2024-05-15 04:42:43.966299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.824 [2024-05-15 04:42:43.978195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:29.824 [2024-05-15 04:42:43.978277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.824 passed 00:05:29.824 Test: blob_create_snapshot_power_failure ...[2024-05-15 04:42:44.013601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:29.824 [2024-05-15 04:42:44.024832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:29.824 [2024-05-15 04:42:44.047591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:30.083 [2024-05-15 04:42:44.059649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:30.083 passed 00:05:30.083 Test: blob_io_unit ...passed 00:05:30.083 Test: blob_io_unit_compatibility ...passed 00:05:30.083 Test: blob_ext_md_pages ...passed 00:05:30.083 Test: blob_esnap_io_4096_4096 ...passed 00:05:30.083 Test: blob_esnap_io_512_512 ...passed 00:05:30.083 Test: blob_esnap_io_4096_512 ...passed 00:05:30.083 Test: 
blob_esnap_io_512_4096 ...passed 00:05:30.083 Suite: blob_bs_nocopy_extent 00:05:30.083 Test: blob_open ...passed 00:05:30.083 Test: blob_create ...[2024-05-15 04:42:44.283118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:30.083 passed 00:05:30.342 Test: blob_create_loop ...passed 00:05:30.342 Test: blob_create_fail ...[2024-05-15 04:42:44.379589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.342 passed 00:05:30.342 Test: blob_create_internal ...passed 00:05:30.342 Test: blob_create_zero_extent ...passed 00:05:30.342 Test: blob_snapshot ...passed 00:05:30.342 Test: blob_clone ...passed 00:05:30.342 Test: blob_inflate ...[2024-05-15 04:42:44.566704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:30.601 passed 00:05:30.601 Test: blob_delete ...passed 00:05:30.601 Test: blob_resize_test ...[2024-05-15 04:42:44.626467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:30.601 passed 00:05:30.601 Test: channel_ops ...passed 00:05:30.601 Test: blob_super ...passed 00:05:30.601 Test: blob_rw_verify_iov ...passed 00:05:30.601 Test: blob_unmap ...passed 00:05:30.601 Test: blob_iter ...passed 00:05:30.860 Test: blob_parse_md ...passed 00:05:30.860 Test: bs_load_pending_removal ...passed 00:05:30.860 Test: bs_unload ...[2024-05-15 04:42:44.898901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:30.860 passed 00:05:30.860 Test: bs_usable_clusters ...passed 00:05:30.860 Test: blob_crc ...[2024-05-15 04:42:44.963198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:30.860 [2024-05-15 04:42:44.963303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:30.860 passed 00:05:30.860 Test: blob_flags ...passed 00:05:30.860 Test: bs_version ...passed 00:05:30.860 Test: blob_set_xattrs_test ...[2024-05-15 04:42:45.060639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.860 [2024-05-15 04:42:45.061079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.860 passed 00:05:31.118 Test: blob_thin_prov_alloc ...passed 00:05:31.118 Test: blob_insert_cluster_msg_test ...passed 00:05:31.118 Test: blob_thin_prov_rw ...passed 00:05:31.118 Test: blob_thin_prov_rle ...passed 00:05:31.118 Test: blob_thin_prov_rw_iov ...passed 00:05:31.118 Test: blob_snapshot_rw ...passed 00:05:31.118 Test: blob_snapshot_rw_iov ...passed 00:05:31.377 Test: blob_inflate_rw ...passed 00:05:31.377 Test: blob_snapshot_freeze_io ...passed 00:05:31.637 Test: blob_operation_split_rw ...passed 00:05:31.637 Test: blob_operation_split_rw_iov ...passed 00:05:31.637 Test: blob_simultaneous_operations ...[2024-05-15 04:42:45.795052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.637 [2024-05-15 
04:42:45.795162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.637 [2024-05-15 04:42:45.797058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.637 [2024-05-15 04:42:45.797119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.637 [2024-05-15 04:42:45.814251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.637 [2024-05-15 04:42:45.814336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.637 [2024-05-15 04:42:45.814454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.637 [2024-05-15 04:42:45.814481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.637 passed 00:05:31.895 Test: blob_persist_test ...passed 00:05:31.895 Test: blob_decouple_snapshot ...passed 00:05:31.895 Test: blob_seek_io_unit ...passed 00:05:31.895 Test: blob_nested_freezes ...passed 00:05:31.895 Suite: blob_blob_nocopy_extent 00:05:31.895 Test: blob_write ...passed 00:05:31.895 Test: blob_read ...passed 00:05:31.895 Test: blob_rw_verify ...passed 00:05:32.154 Test: blob_rw_verify_iov_nomem ...passed 00:05:32.154 Test: blob_rw_iov_read_only ...passed 00:05:32.154 Test: blob_xattr ...passed 00:05:32.154 Test: blob_dirty_shutdown ...passed 00:05:32.154 Test: blob_is_degraded ...passed 00:05:32.154 Suite: blob_esnap_bs_nocopy_extent 00:05:32.154 Test: blob_esnap_create ...passed 00:05:32.154 Test: blob_esnap_thread_add_remove ...passed 00:05:32.154 Test: blob_esnap_clone_snapshot ...passed 00:05:32.413 Test: blob_esnap_clone_inflate ...passed 00:05:32.413 Test: blob_esnap_clone_decouple ...passed 00:05:32.413 Test: blob_esnap_clone_reload ...passed 00:05:32.413 Test: blob_esnap_hotplug ...passed 00:05:32.413 Suite: blob_copy_noextent 00:05:32.413 Test: blob_init ...[2024-05-15 04:42:46.500621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:32.413 passed 00:05:32.413 Test: blob_thin_provision ...passed 00:05:32.413 Test: blob_read_only ...passed 00:05:32.413 Test: bs_load ...[2024-05-15 04:42:46.547323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:32.413 passed 00:05:32.413 Test: bs_load_custom_cluster_size ...passed 00:05:32.413 Test: bs_load_after_failed_grow ...passed 00:05:32.413 Test: bs_cluster_sz ...[2024-05-15 04:42:46.569991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:32.413 [2024-05-15 04:42:46.570155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
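
The bs_unload failures repeated through these suites ("Blobstore still has open blobs") encode a strict teardown order: every blob opened with spdk_bs_open_blob must be closed with spdk_blob_close before spdk_bs_unload can succeed. A minimal sketch of the correct ordering, error handling elided:

#include "spdk/blob.h"

static void unload_done(void *cb_arg, int bserrno)
{
    /* bserrno == 0: blobstore cleanly unloaded */
}

static void blob_closed(void *cb_arg, int bserrno)
{
    struct spdk_blob_store *bs = cb_arg;

    /* Only once the last open blob is closed may the store be unloaded;
     * calling spdk_bs_unload() earlier fails with the error seen above. */
    spdk_bs_unload(bs, unload_done, NULL);
}

static void teardown(struct spdk_blob_store *bs, struct spdk_blob *open_blob)
{
    spdk_blob_close(open_blob, blob_closed, bs);
}
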
00:05:32.413 [2024-05-15 04:42:46.570193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:32.413 passed 00:05:32.413 Test: bs_resize_md ...passed 00:05:32.413 Test: bs_destroy ...passed 00:05:32.413 Test: bs_type ...passed 00:05:32.413 Test: bs_super_block ...passed 00:05:32.413 Test: bs_test_recover_cluster_count ...passed 00:05:32.413 Test: bs_grow_live ...passed 00:05:32.413 Test: bs_grow_live_no_space ...passed 00:05:32.413 Test: bs_test_grow ...passed 00:05:32.673 Test: blob_serialize_test ...passed 00:05:32.673 Test: super_block_crc ...passed 00:05:32.673 Test: blob_thin_prov_write_count_io ...passed 00:05:32.673 Test: bs_load_iter_test ...passed 00:05:32.673 Test: blob_relations ...[2024-05-15 04:42:46.702589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.673 [2024-05-15 04:42:46.702699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 [2024-05-15 04:42:46.703366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.673 [2024-05-15 04:42:46.703397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 passed 00:05:32.673 Test: blob_relations2 ...[2024-05-15 04:42:46.715921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.673 [2024-05-15 04:42:46.716006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 [2024-05-15 04:42:46.716045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.673 [2024-05-15 04:42:46.716059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 [2024-05-15 04:42:46.716656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.673 [2024-05-15 04:42:46.716690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 [2024-05-15 04:42:46.717205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.673 [2024-05-15 04:42:46.717258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 passed 00:05:32.673 Test: blob_relations3 ...passed 00:05:32.673 Test: blobstore_clean_power_failure ...passed 00:05:32.673 Test: blob_delete_snapshot_power_failure ...[2024-05-15 04:42:46.859000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:32.673 [2024-05-15 04:42:46.870278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:32.673 [2024-05-15 04:42:46.870356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.673 [2024-05-15 04:42:46.870398] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 [2024-05-15 04:42:46.882005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:32.673 [2024-05-15 04:42:46.882072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:32.673 [2024-05-15 04:42:46.882114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.673 [2024-05-15 04:42:46.882133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.673 [2024-05-15 04:42:46.893680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:32.673 [2024-05-15 04:42:46.894090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.933 [2024-05-15 04:42:46.910838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:32.933 [2024-05-15 04:42:46.910939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.933 [2024-05-15 04:42:46.923556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:32.933 [2024-05-15 04:42:46.923681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.933 passed 00:05:32.933 Test: blob_create_snapshot_power_failure ...[2024-05-15 04:42:46.961233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:32.933 [2024-05-15 04:42:46.984953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:32.933 [2024-05-15 04:42:46.996236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:32.933 passed 00:05:32.933 Test: blob_io_unit ...passed 00:05:32.933 Test: blob_io_unit_compatibility ...passed 00:05:32.933 Test: blob_ext_md_pages ...passed 00:05:32.933 Test: blob_esnap_io_4096_4096 ...passed 00:05:32.933 Test: blob_esnap_io_512_512 ...passed 00:05:32.933 Test: blob_esnap_io_4096_512 ...passed 00:05:33.192 Test: blob_esnap_io_512_4096 ...passed 00:05:33.192 Suite: blob_bs_copy_noextent 00:05:33.192 Test: blob_open ...passed 00:05:33.192 Test: blob_create ...[2024-05-15 04:42:47.234148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:33.192 passed 00:05:33.192 Test: blob_create_loop ...passed 00:05:33.192 Test: blob_create_fail ...[2024-05-15 04:42:47.323144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.192 passed 00:05:33.192 Test: blob_create_internal ...passed 00:05:33.192 Test: blob_create_zero_extent ...passed 00:05:33.450 Test: blob_snapshot ...passed 00:05:33.450 Test: blob_clone ...passed 00:05:33.450 Test: blob_inflate ...[2024-05-15 04:42:47.501576] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:33.450 passed 00:05:33.450 Test: blob_delete ...passed 00:05:33.450 Test: blob_resize_test ...[2024-05-15 04:42:47.571529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:33.450 passed 00:05:33.450 Test: channel_ops ...passed 00:05:33.450 Test: blob_super ...passed 00:05:33.450 Test: blob_rw_verify_iov ...passed 00:05:33.709 Test: blob_unmap ...passed 00:05:33.709 Test: blob_iter ...passed 00:05:33.709 Test: blob_parse_md ...passed 00:05:33.709 Test: bs_load_pending_removal ...passed 00:05:33.709 Test: bs_unload ...[2024-05-15 04:42:47.829970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:33.709 passed 00:05:33.709 Test: bs_usable_clusters ...passed 00:05:33.709 Test: blob_crc ...[2024-05-15 04:42:47.893798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:33.709 [2024-05-15 04:42:47.893929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:33.709 passed 00:05:33.709 Test: blob_flags ...passed 00:05:33.967 Test: bs_version ...passed 00:05:33.967 Test: blob_set_xattrs_test ...[2024-05-15 04:42:47.990120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.967 [2024-05-15 04:42:47.990223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.967 passed 00:05:33.967 Test: blob_thin_prov_alloc ...passed 00:05:33.967 Test: blob_insert_cluster_msg_test ...passed 00:05:33.967 Test: blob_thin_prov_rw ...passed 00:05:33.967 Test: blob_thin_prov_rle ...passed 00:05:34.226 Test: blob_thin_prov_rw_iov ...passed 00:05:34.226 Test: blob_snapshot_rw ...passed 00:05:34.226 Test: blob_snapshot_rw_iov ...passed 00:05:34.226 Test: blob_inflate_rw ...passed 00:05:34.511 Test: blob_snapshot_freeze_io ...passed 00:05:34.511 Test: blob_operation_split_rw ...passed 00:05:34.511 Test: blob_operation_split_rw_iov ...passed 00:05:34.511 Test: blob_simultaneous_operations ...[2024-05-15 04:42:48.701211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.511 [2024-05-15 04:42:48.701321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.511 [2024-05-15 04:42:48.701713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.511 [2024-05-15 04:42:48.702006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.511 [2024-05-15 04:42:48.704324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.511 [2024-05-15 04:42:48.704361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.511 [2024-05-15 04:42:48.704430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:05:34.511 [2024-05-15 04:42:48.704446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.770 passed 00:05:34.770 Test: blob_persist_test ...passed 00:05:34.770 Test: blob_decouple_snapshot ...passed 00:05:34.770 Test: blob_seek_io_unit ...passed 00:05:34.770 Test: blob_nested_freezes ...passed 00:05:34.770 Suite: blob_blob_copy_noextent 00:05:34.770 Test: blob_write ...passed 00:05:34.770 Test: blob_read ...passed 00:05:34.770 Test: blob_rw_verify ...passed 00:05:34.770 Test: blob_rw_verify_iov_nomem ...passed 00:05:35.028 Test: blob_rw_iov_read_only ...passed 00:05:35.028 Test: blob_xattr ...passed 00:05:35.028 Test: blob_dirty_shutdown ...passed 00:05:35.028 Test: blob_is_degraded ...passed 00:05:35.028 Suite: blob_esnap_bs_copy_noextent 00:05:35.028 Test: blob_esnap_create ...passed 00:05:35.028 Test: blob_esnap_thread_add_remove ...passed 00:05:35.028 Test: blob_esnap_clone_snapshot ...passed 00:05:35.287 Test: blob_esnap_clone_inflate ...passed 00:05:35.287 Test: blob_esnap_clone_decouple ...passed 00:05:35.287 Test: blob_esnap_clone_reload ...passed 00:05:35.287 Test: blob_esnap_hotplug ...passed 00:05:35.287 Suite: blob_copy_extent 00:05:35.287 Test: blob_init ...[2024-05-15 04:42:49.364876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:35.287 passed 00:05:35.287 Test: blob_thin_provision ...passed 00:05:35.287 Test: blob_read_only ...passed 00:05:35.287 Test: bs_load ...[2024-05-15 04:42:49.407599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:35.287 passed 00:05:35.287 Test: bs_load_custom_cluster_size ...passed 00:05:35.287 Test: bs_load_after_failed_grow ...passed 00:05:35.287 Test: bs_cluster_sz ...[2024-05-15 04:42:49.429980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:35.287 [2024-05-15 04:42:49.430116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
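
The recurring blob_resize_test exercises spdk_blob_resize, which changes a blob's size in clusters; for thin-provisioned blobs the new clusters stay unallocated until first written, and the change only becomes durable after a metadata sync. A minimal sketch, assuming an open, writable blob:

#include "spdk/blob.h"

static void md_synced(void *cb_arg, int bserrno)
{
    /* bserrno == 0: the new size is now persisted in the blob metadata */
}

static void resized(void *cb_arg, int bserrno)
{
    struct spdk_blob *blob = cb_arg;

    /* A successful resize is in-memory only until the metadata is synced. */
    spdk_blob_sync_md(blob, md_synced, blob);
}

static void grow_blob(struct spdk_blob *blob, uint64_t new_cluster_count)
{
    spdk_blob_resize(blob, new_cluster_count, resized, blob);
}
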
00:05:35.287 [2024-05-15 04:42:49.430145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:35.287 passed 00:05:35.287 Test: bs_resize_md ...passed 00:05:35.287 Test: bs_destroy ...passed 00:05:35.287 Test: bs_type ...passed 00:05:35.287 Test: bs_super_block ...passed 00:05:35.287 Test: bs_test_recover_cluster_count ...passed 00:05:35.287 Test: bs_grow_live ...passed 00:05:35.287 Test: bs_grow_live_no_space ...passed 00:05:35.287 Test: bs_test_grow ...passed 00:05:35.545 Test: blob_serialize_test ...passed 00:05:35.545 Test: super_block_crc ...passed 00:05:35.545 Test: blob_thin_prov_write_count_io ...passed 00:05:35.545 Test: bs_load_iter_test ...passed 00:05:35.545 Test: blob_relations ...[2024-05-15 04:42:49.571429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:35.545 [2024-05-15 04:42:49.571613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.545 [2024-05-15 04:42:49.573280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:35.545 [2024-05-15 04:42:49.573346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.545 passed 00:05:35.545 Test: blob_relations2 ...[2024-05-15 04:42:49.588736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:35.545 [2024-05-15 04:42:49.588846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.545 [2024-05-15 04:42:49.588929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:35.545 [2024-05-15 04:42:49.588967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.545 [2024-05-15 04:42:49.590653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:35.545 [2024-05-15 04:42:49.590736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.545 [2024-05-15 04:42:49.591221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:35.545 [2024-05-15 04:42:49.591272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.545 passed 00:05:35.545 Test: blob_relations3 ...passed 00:05:35.545 Test: blobstore_clean_power_failure ...passed 00:05:35.545 Test: blob_delete_snapshot_power_failure ...[2024-05-15 04:42:49.753122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:35.545 [2024-05-15 04:42:49.765134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:35.804 [2024-05-15 04:42:49.780542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:35.804 [2024-05-15 04:42:49.781033] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:35.804 [2024-05-15 04:42:49.781125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.804 [2024-05-15 04:42:49.795988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:35.804 [2024-05-15 04:42:49.796056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:35.804 [2024-05-15 04:42:49.796093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:35.804 [2024-05-15 04:42:49.796113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.804 [2024-05-15 04:42:49.807555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:35.804 [2024-05-15 04:42:49.807618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:35.804 [2024-05-15 04:42:49.807641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:35.804 [2024-05-15 04:42:49.807662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.804 [2024-05-15 04:42:49.819243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:35.804 [2024-05-15 04:42:49.819325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.804 [2024-05-15 04:42:49.835151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:35.804 [2024-05-15 04:42:49.835233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.804 [2024-05-15 04:42:49.847060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:35.804 [2024-05-15 04:42:49.847135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:35.804 passed 00:05:35.804 Test: blob_create_snapshot_power_failure ...[2024-05-15 04:42:49.883526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:35.804 [2024-05-15 04:42:49.895505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:35.804 [2024-05-15 04:42:49.917798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:35.804 [2024-05-15 04:42:49.929371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:35.804 passed 00:05:35.804 Test: blob_io_unit ...passed 00:05:35.804 Test: blob_io_unit_compatibility ...passed 00:05:35.804 Test: blob_ext_md_pages ...passed 00:05:35.804 Test: blob_esnap_io_4096_4096 ...passed 00:05:36.062 Test: blob_esnap_io_512_512 ...passed 00:05:36.062 Test: blob_esnap_io_4096_512 ...passed 00:05:36.062 Test: 
blob_esnap_io_512_4096 ...passed 00:05:36.062 Suite: blob_bs_copy_extent 00:05:36.062 Test: blob_open ...passed 00:05:36.062 Test: blob_create ...[2024-05-15 04:42:50.150587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:36.062 passed 00:05:36.062 Test: blob_create_loop ...passed 00:05:36.062 Test: blob_create_fail ...[2024-05-15 04:42:50.241417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:36.062 passed 00:05:36.062 Test: blob_create_internal ...passed 00:05:36.321 Test: blob_create_zero_extent ...passed 00:05:36.321 Test: blob_snapshot ...passed 00:05:36.321 Test: blob_clone ...passed 00:05:36.321 Test: blob_inflate ...[2024-05-15 04:42:50.421433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:36.321 passed 00:05:36.321 Test: blob_delete ...passed 00:05:36.321 Test: blob_resize_test ...[2024-05-15 04:42:50.481622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:36.321 passed 00:05:36.321 Test: channel_ops ...passed 00:05:36.579 Test: blob_super ...passed 00:05:36.579 Test: blob_rw_verify_iov ...passed 00:05:36.579 Test: blob_unmap ...passed 00:05:36.579 Test: blob_iter ...passed 00:05:36.579 Test: blob_parse_md ...passed 00:05:36.579 Test: bs_load_pending_removal ...passed 00:05:36.579 Test: bs_unload ...[2024-05-15 04:42:50.736734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:36.579 passed 00:05:36.579 Test: bs_usable_clusters ...passed 00:05:36.579 Test: blob_crc ...[2024-05-15 04:42:50.799417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:36.579 [2024-05-15 04:42:50.799549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:36.579 passed 00:05:36.838 Test: blob_flags ...passed 00:05:36.838 Test: bs_version ...passed 00:05:36.838 Test: blob_set_xattrs_test ...[2024-05-15 04:42:50.894453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:36.838 [2024-05-15 04:42:50.894552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:36.838 passed 00:05:36.838 Test: blob_thin_prov_alloc ...passed 00:05:36.838 Test: blob_insert_cluster_msg_test ...passed 00:05:36.838 Test: blob_thin_prov_rw ...passed 00:05:36.838 Test: blob_thin_prov_rle ...passed 00:05:37.096 Test: blob_thin_prov_rw_iov ...passed 00:05:37.096 Test: blob_snapshot_rw ...passed 00:05:37.096 Test: blob_snapshot_rw_iov ...passed 00:05:37.096 Test: blob_inflate_rw ...passed 00:05:37.355 Test: blob_snapshot_freeze_io ...passed 00:05:37.355 Test: blob_operation_split_rw ...passed 00:05:37.355 Test: blob_operation_split_rw_iov ...passed 00:05:37.355 Test: blob_simultaneous_operations ...[2024-05-15 04:42:51.583601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:37.355 [2024-05-15 
04:42:51.583973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:37.355 [2024-05-15 04:42:51.584381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:37.355 [2024-05-15 04:42:51.584406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:37.355 [2024-05-15 04:42:51.586738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:37.355 [2024-05-15 04:42:51.586777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:37.355 [2024-05-15 04:42:51.586855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:37.355 [2024-05-15 04:42:51.586876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:37.614 passed 00:05:37.614 Test: blob_persist_test ...passed 00:05:37.614 Test: blob_decouple_snapshot ...passed 00:05:37.614 Test: blob_seek_io_unit ...passed 00:05:37.614 Test: blob_nested_freezes ...passed 00:05:37.614 Suite: blob_blob_copy_extent 00:05:37.614 Test: blob_write ...passed 00:05:37.614 Test: blob_read ...passed 00:05:37.614 Test: blob_rw_verify ...passed 00:05:37.873 Test: blob_rw_verify_iov_nomem ...passed 00:05:37.873 Test: blob_rw_iov_read_only ...passed 00:05:37.873 Test: blob_xattr ...passed 00:05:37.873 Test: blob_dirty_shutdown ...passed 00:05:37.873 Test: blob_is_degraded ...passed 00:05:37.873 Suite: blob_esnap_bs_copy_extent 00:05:37.873 Test: blob_esnap_create ...passed 00:05:37.873 Test: blob_esnap_thread_add_remove ...passed 00:05:38.133 Test: blob_esnap_clone_snapshot ...passed 00:05:38.133 Test: blob_esnap_clone_inflate ...passed 00:05:38.133 Test: blob_esnap_clone_decouple ...passed 00:05:38.133 Test: blob_esnap_clone_reload ...passed 00:05:38.133 Test: blob_esnap_hotplug ...passed 00:05:38.133 00:05:38.133 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.133 suites 16 16 n/a 0 0 00:05:38.133 tests 348 348 348 0 0 00:05:38.133 asserts 92605 92605 92605 0 n/a 00:05:38.133 00:05:38.133 Elapsed time = 11.870 seconds 00:05:38.133 04:42:52 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:05:38.133 00:05:38.133 00:05:38.133 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.133 http://cunit.sourceforge.net/ 00:05:38.133 00:05:38.133 00:05:38.133 Suite: blob_bdev 00:05:38.133 Test: create_bs_dev ...passed 00:05:38.133 Test: create_bs_dev_ro ...passed 00:05:38.133 Test: create_bs_dev_rw ...passed 00:05:38.133 Test: claim_bs_dev ...[2024-05-15 04:42:52.338021] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:05:38.133 passed 00:05:38.133 Test: claim_bs_dev_ro ...passed 00:05:38.133 Test: deferred_destroy_refs ...passed 00:05:38.133 Test: deferred_destroy_channels ...passed 00:05:38.133 Test: deferred_destroy_threads ...passed 00:05:38.133 00:05:38.133 [2024-05-15 04:42:52.338401] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:05:38.133 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.133 suites 1 1 n/a 0 0 00:05:38.133 tests 8 8 8 0 0 00:05:38.133 
asserts 119 119 119 0 n/a 00:05:38.133 00:05:38.133 Elapsed time = 0.000 seconds 00:05:38.133 04:42:52 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:05:38.392 00:05:38.392 00:05:38.392 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.392 http://cunit.sourceforge.net/ 00:05:38.392 00:05:38.392 00:05:38.392 Suite: tree 00:05:38.392 Test: blobfs_tree_op_test ...passed 00:05:38.392 00:05:38.392 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.392 suites 1 1 n/a 0 0 00:05:38.392 tests 1 1 1 0 0 00:05:38.392 asserts 27 27 27 0 n/a 00:05:38.392 00:05:38.392 Elapsed time = 0.000 seconds 00:05:38.392 04:42:52 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:05:38.392 00:05:38.392 00:05:38.392 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.392 http://cunit.sourceforge.net/ 00:05:38.392 00:05:38.392 00:05:38.392 Suite: blobfs_async_ut 00:05:38.392 Test: fs_init ...passed 00:05:38.392 Test: fs_open ...passed 00:05:38.392 Test: fs_create ...passed 00:05:38.392 Test: fs_truncate ...passed 00:05:38.392 Test: fs_rename ...[2024-05-15 04:42:52.518442] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:05:38.392 passed 00:05:38.392 Test: fs_rw_async ...passed 00:05:38.392 Test: fs_writev_readv_async ...passed 00:05:38.392 Test: tree_find_buffer_ut ...passed 00:05:38.392 Test: channel_ops ...passed 00:05:38.392 Test: channel_ops_sync ...passed 00:05:38.392 00:05:38.392 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.392 suites 1 1 n/a 0 0 00:05:38.392 tests 10 10 10 0 0 00:05:38.392 asserts 292 292 292 0 n/a 00:05:38.392 00:05:38.392 Elapsed time = 0.160 seconds 00:05:38.392 04:42:52 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:05:38.651 00:05:38.651 00:05:38.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.651 http://cunit.sourceforge.net/ 00:05:38.651 00:05:38.651 00:05:38.651 Suite: blobfs_sync_ut 00:05:38.651 Test: cache_read_after_write ...[2024-05-15 04:42:52.682274] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:05:38.651 passed 00:05:38.651 Test: file_length ...passed 00:05:38.651 Test: append_write_to_extend_blob ...passed 00:05:38.651 Test: partial_buffer ...passed 00:05:38.651 Test: cache_write_null_buffer ...passed 00:05:38.651 Test: fs_create_sync ...passed 00:05:38.651 Test: fs_rename_sync ...passed 00:05:38.651 Test: cache_append_no_cache ...passed 00:05:38.651 Test: fs_delete_file_without_close ...passed 00:05:38.651 00:05:38.651 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.651 suites 1 1 n/a 0 0 00:05:38.651 tests 9 9 9 0 0 00:05:38.651 asserts 345 345 345 0 n/a 00:05:38.651 00:05:38.651 Elapsed time = 0.320 seconds 00:05:38.651 04:42:52 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:05:38.651 00:05:38.651 00:05:38.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.651 http://cunit.sourceforge.net/ 00:05:38.651 00:05:38.651 00:05:38.651 Suite: blobfs_bdev_ut 00:05:38.651 Test: spdk_blobfs_bdev_detect_test ...passed 00:05:38.651 Test: spdk_blobfs_bdev_create_test ...passed 00:05:38.651 Test: spdk_blobfs_bdev_mount_test ...passed 00:05:38.651 00:05:38.651 Run Summary: Type Total Ran 
Passed Failed Inactive 00:05:38.651 suites 1 1 n/a 0 0 00:05:38.651 tests 3 3 3 0 0 00:05:38.651 asserts 9 9 9 0 n/a 00:05:38.651 00:05:38.651 Elapsed time = 0.000 seconds 00:05:38.651 [2024-05-15 04:42:52.860267] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:38.651 [2024-05-15 04:42:52.860573] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:38.651 ************************************ 00:05:38.651 END TEST unittest_blob_blobfs 00:05:38.651 ************************************ 00:05:38.651 00:05:38.651 real 0m12.598s 00:05:38.651 user 0m11.938s 00:05:38.651 sys 0m0.779s 00:05:38.651 04:42:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.651 04:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.911 04:42:52 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:05:38.911 04:42:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.911 04:42:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.911 04:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.911 ************************************ 00:05:38.911 START TEST unittest_event 00:05:38.911 ************************************ 00:05:38.911 04:42:52 -- common/autotest_common.sh@1104 -- # unittest_event 00:05:38.911 04:42:52 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:05:38.911 00:05:38.911 00:05:38.911 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.911 http://cunit.sourceforge.net/ 00:05:38.911 00:05:38.911 00:05:38.911 Suite: app_suite 00:05:38.911 Test: test_spdk_app_parse_args ...app_ut [options] 00:05:38.911 options: 00:05:38.911 -c, --config JSON config file (default none) 00:05:38.911 --json JSON config file (default none) 00:05:38.911 --json-ignore-init-errors 00:05:38.911 don't exit on invalid config entry 00:05:38.911 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:38.911 -g, --single-file-segments 00:05:38.911 force creating just one hugetlbfs file 00:05:38.911 -h, --help show this usage 00:05:38.911 -i, --shm-id shared memory ID (optional) 00:05:38.911 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:05:38.911 --lcores lcore to CPU mapping list. The list is in the format: 00:05:38.911 [<,lcores[@CPUs]>...] 00:05:38.911 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:38.911 Within the group, '-' is used for range separator, 00:05:38.911 ',' is used for single number separator. 00:05:38.911 '( )' can be omitted for single element group, 00:05:38.911 '@' can be omitted if cpus and lcores have the same value 00:05:38.911 -n, --mem-channels channel number of memory channels used for DPDK 00:05:38.911 -p, --main-core main (primary) core for DPDK 00:05:38.911 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:38.911 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:05:38.911 --disable-cpumask-locks Disable CPU core lock files. 
00:05:38.911 --silence-noticelog disable notice level logging to stderr 00:05:38.911 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:38.911 -u, --no-pci disable PCI access 00:05:38.911 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:38.911 --max-delay maximum reactor delay (in microseconds) 00:05:38.911 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:38.911 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:38.911 -R, --huge-unlink unlink huge files after initialization 00:05:38.911 -v, --version print SPDK version 00:05:38.911 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:38.911 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:38.911 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:38.911 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:05:38.911 Tracepoints vary in size and can use more than one trace entry. 00:05:38.911 --rpcs-allowed comma-separated list of permitted RPCS 00:05:38.911 --env-context Opaque context for use of the env implementation 00:05:38.911 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:38.911 --no-huge run without using hugepages 00:05:38.911 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:05:38.911 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:05:38.911 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:05:38.911 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:05:38.911 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:05:38.911 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:05:38.911 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:05:38.911 app_ut [options] 00:05:38.911 options: 00:05:38.911 -c, --config JSON config file (default none) 00:05:38.911 --json JSON config file (default none) 00:05:38.911 --json-ignore-init-errors 00:05:38.911 don't exit on invalid config entry 00:05:38.911 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:38.911 -g, --single-file-segments 00:05:38.911 force creating just one hugetlbfs file 00:05:38.911 -h, --help show this usage 00:05:38.911 -i, --shm-id shared memory ID (optional) 00:05:38.911 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:05:38.911 --lcores lcore to CPU mapping list. The list is in the format: 00:05:38.911 [<,lcores[@CPUs]>...] 00:05:38.911 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:38.911 Within the group, '-' is used for range separator, 00:05:38.911 ',' is used for single number separator. 00:05:38.911 '( )' can be omitted for single element group, 00:05:38.911 '@' can be omitted if cpus and lcores have the same value 00:05:38.911 -n, --mem-channels channel number of memory channels used for DPDK 00:05:38.911 -p, --main-core main (primary) core for DPDK 00:05:38.911 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:38.911 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:05:38.911 --disable-cpumask-locks Disable CPU core lock files.
00:05:38.911 --silence-noticelog disable notice level logging to stderr 00:05:38.911 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:38.911 -u, --no-pci disable PCI access 00:05:38.911 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:38.911 --max-delay maximum reactor delay (in microseconds) 00:05:38.911 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:38.911 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:38.911 -R, --huge-unlink unlink huge files after initialization 00:05:38.911 -v, --version print SPDK version 00:05:38.911 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:38.911 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:38.911 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:38.911 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:05:38.911 Tracepoints vary in size and can use more than one trace entry. 00:05:38.911 --rpcs-allowed comma-separated list of permitted RPCS 00:05:38.911 --env-context Opaque context for use of the env implementation 00:05:38.911 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:38.911 --no-huge run without using hugepages 00:05:38.911 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:05:38.911 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:05:38.911 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:05:38.911 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:05:38.911 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:05:38.911 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:05:38.911 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:05:38.911 app_ut [options] 00:05:38.911 options: 00:05:38.912 -c, --config JSON config file (default none) 00:05:38.912 --json JSON config file (default none) 00:05:38.912 --json-ignore-init-errors 00:05:38.912 don't exit on invalid config entry 00:05:38.912 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:38.912 -g, --single-file-segments 00:05:38.912 force creating just one hugetlbfs file 00:05:38.912 -h, --help show this usage 00:05:38.912 -i, --shm-id shared memory ID (optional) 00:05:38.912 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:05:38.912 --lcores lcore to CPU mapping list. The list is in the format: 00:05:38.912 [<,lcores[@CPUs]>...] 00:05:38.912 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:38.912 Within the group, '-' is used for range separator, 00:05:38.912 ',' is used for single number separator. 00:05:38.912 '( )' can be omitted for single element group, 00:05:38.912 '@' can be omitted if cpus and lcores have the same value 00:05:38.912 -n, --mem-channels channel number of memory channels used for DPDK 00:05:38.912 -p, --main-core main (primary) core for DPDK 00:05:38.912 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:38.912 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:05:38.912 --disable-cpumask-locks Disable CPU core lock files.
00:05:38.912 --silence-noticelog disable notice level logging to stderr 00:05:38.912 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:38.912 -u, --no-pci disable PCI access 00:05:38.912 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:38.912 --max-delay maximum reactor delay (in microseconds) 00:05:38.912 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:38.912 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:38.912 -R, --huge-unlink unlink huge files after initialization 00:05:38.912 -v, --version print SPDK version 00:05:38.912 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:38.912 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:38.912 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:38.912 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:05:38.912 Tracepoints vary in size and can use more than one trace entry. 00:05:38.912 --rpcs-allowed comma-separated list of permitted RPCS 00:05:38.912 --env-context Opaque context for use of the env implementation 00:05:38.912 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:38.912 --no-huge run without using hugepages 00:05:38.912 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:05:38.912 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:05:38.912 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:05:38.912 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:05:38.912 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:05:38.912 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:05:38.912 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:05:38.912 passed 00:05:38.912 00:05:38.912 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.912 suites 1 1 n/a 0 0 00:05:38.912 tests 1 1 1 0 0 00:05:38.912 asserts 8 8 8 0 n/a 00:05:38.912 00:05:38.912 Elapsed time = 0.000 seconds 00:05:38.912 app_ut: invalid option -- 'z' 00:05:38.912 app_ut: unrecognized option '--test-long-opt' 00:05:38.912 [2024-05-15 04:42:52.943548] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:05:38.912 [2024-05-15 04:42:52.943831] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:05:38.912 [2024-05-15 04:42:52.944123] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:05:38.912 04:42:52 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:05:38.912 00:05:38.912 00:05:38.912 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.912 http://cunit.sourceforge.net/ 00:05:38.912 00:05:38.912 00:05:38.912 Suite: app_suite 00:05:38.912 Test: test_create_reactor ...passed 00:05:38.912 Test: test_init_reactors ...passed 00:05:38.912 Test: test_event_call ...passed 00:05:38.912 Test: test_schedule_thread ...passed 00:05:38.912 Test: test_reschedule_thread ...passed 00:05:38.912 Test: test_bind_thread ...passed 00:05:38.912 Test: test_for_each_reactor ...passed 00:05:38.912 Test: test_reactor_stats ...passed 00:05:38.912 Test: test_scheduler ...passed 00:05:38.912 Test: test_governor ...passed 00:05:38.912 00:05:38.912 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.912 suites 1 1 n/a 0 0 00:05:38.912 tests 10 10 10 0 0 00:05:38.912 asserts 344 344 344 0 n/a 00:05:38.912 00:05:38.912 Elapsed time = 0.010 seconds 00:05:38.912 00:05:38.912 real 0m0.085s 00:05:38.912 user 0m0.048s 00:05:38.912 sys 0m0.038s 00:05:38.912 04:42:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.912 ************************************ 00:05:38.912 END TEST unittest_event 00:05:38.912 ************************************ 00:05:38.912 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.912 04:42:53 -- unit/unittest.sh@233 -- # uname -s 00:05:38.912 04:42:53 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:05:38.912 04:42:53 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:05:38.912 04:42:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.912 04:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.912 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.912 ************************************ 00:05:38.912 START TEST unittest_ftl 00:05:38.912 ************************************ 00:05:38.912 04:42:53 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:05:38.912 04:42:53 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:05:38.912 00:05:38.912 00:05:38.912 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.912 http://cunit.sourceforge.net/ 00:05:38.912 00:05:38.912 00:05:38.912 Suite: ftl_band_suite 00:05:38.912 Test: test_band_block_offset_from_addr_base ...passed 00:05:39.170 Test: test_band_block_offset_from_addr_offset ...passed 00:05:39.170 Test: test_band_addr_from_block_offset ...passed 00:05:39.170 Test: test_band_set_addr ...passed 00:05:39.170 Test: test_invalidate_addr ...passed 00:05:39.170 Test: test_next_xfer_addr ...passed 00:05:39.170 00:05:39.170 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.170 suites 1 1 n/a 0 0 00:05:39.170 tests 6 6 6 0 0 00:05:39.170 asserts 30356 30356 30356 0 n/a 00:05:39.170 00:05:39.170 Elapsed time = 0.210 seconds 00:05:39.170 04:42:53 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:05:39.170 00:05:39.170 00:05:39.170 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.170 http://cunit.sourceforge.net/ 00:05:39.170 
00:05:39.170 00:05:39.170 Suite: ftl_bitmap 00:05:39.170 Test: test_ftl_bitmap_create ...[2024-05-15 04:42:53.399621] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:05:39.170 passed 00:05:39.170 Test: test_ftl_bitmap_get ...passed 00:05:39.170 Test: test_ftl_bitmap_set ...passed 00:05:39.170 Test: test_ftl_bitmap_clear ...passed 00:05:39.170 Test: test_ftl_bitmap_find_first_set ...passed 00:05:39.170 Test: test_ftl_bitmap_find_first_clear ...passed 00:05:39.170 Test: test_ftl_bitmap_count_set ...passed 00:05:39.170 00:05:39.170 [2024-05-15 04:42:53.400085] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:05:39.170 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.170 suites 1 1 n/a 0 0 00:05:39.170 tests 7 7 7 0 0 00:05:39.170 asserts 137 137 137 0 n/a 00:05:39.170 00:05:39.170 Elapsed time = 0.000 seconds 00:05:39.429 04:42:53 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:05:39.429 00:05:39.429 00:05:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.429 http://cunit.sourceforge.net/ 00:05:39.429 00:05:39.429 00:05:39.429 Suite: ftl_io_suite 00:05:39.429 Test: test_completion ...passed 00:05:39.429 Test: test_multiple_ios ...passed 00:05:39.429 00:05:39.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.429 suites 1 1 n/a 0 0 00:05:39.429 tests 2 2 2 0 0 00:05:39.429 asserts 47 47 47 0 n/a 00:05:39.429 00:05:39.429 Elapsed time = 0.000 seconds 00:05:39.429 04:42:53 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:05:39.429 00:05:39.429 00:05:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.429 http://cunit.sourceforge.net/ 00:05:39.429 00:05:39.429 00:05:39.429 Suite: ftl_mngt 00:05:39.429 Test: test_next_step ...passed 00:05:39.429 Test: test_continue_step ...passed 00:05:39.429 Test: test_get_func_and_step_cntx_alloc ...passed 00:05:39.429 Test: test_fail_step ...passed 00:05:39.429 Test: test_mngt_call_and_call_rollback ...passed 00:05:39.429 Test: test_nested_process_failure ...passed 00:05:39.429 00:05:39.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.429 suites 1 1 n/a 0 0 00:05:39.429 tests 6 6 6 0 0 00:05:39.429 asserts 176 176 176 0 n/a 00:05:39.429 00:05:39.429 Elapsed time = 0.000 seconds 00:05:39.429 04:42:53 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:05:39.429 00:05:39.429 00:05:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.429 http://cunit.sourceforge.net/ 00:05:39.429 00:05:39.429 00:05:39.429 Suite: ftl_mempool 00:05:39.429 Test: test_ftl_mempool_create ...passed 00:05:39.429 Test: test_ftl_mempool_get_put ...passed 00:05:39.429 00:05:39.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.429 suites 1 1 n/a 0 0 00:05:39.429 tests 2 2 2 0 0 00:05:39.429 asserts 36 36 36 0 n/a 00:05:39.429 00:05:39.429 Elapsed time = 0.000 seconds 00:05:39.429 04:42:53 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:05:39.429 00:05:39.429 00:05:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.429 http://cunit.sourceforge.net/ 00:05:39.429 00:05:39.429 00:05:39.429 Suite: ftl_addr64_suite 00:05:39.429 Test: test_addr_cached ...passed 00:05:39.429 00:05:39.429 
Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.429 suites 1 1 n/a 0 0 00:05:39.429 tests 1 1 1 0 0 00:05:39.429 asserts 1536 1536 1536 0 n/a 00:05:39.429 00:05:39.429 Elapsed time = 0.000 seconds 00:05:39.429 04:42:53 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:05:39.429 00:05:39.429 00:05:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.429 http://cunit.sourceforge.net/ 00:05:39.429 00:05:39.429 00:05:39.429 Suite: ftl_sb 00:05:39.429 Test: test_sb_crc_v2 ...passed 00:05:39.429 Test: test_sb_crc_v3 ...passed 00:05:39.429 Test: test_sb_v3_md_layout ...passed 00:05:39.429 Test: test_sb_v5_md_layout ...passed 00:05:39.429 00:05:39.429 [2024-05-15 04:42:53.528624] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:05:39.429 [2024-05-15 04:42:53.528887] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:05:39.429 [2024-05-15 04:42:53.528926] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:05:39.429 [2024-05-15 04:42:53.528962] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:05:39.429 [2024-05-15 04:42:53.528993] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:05:39.429 [2024-05-15 04:42:53.529063] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:05:39.429 [2024-05-15 04:42:53.529090] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:05:39.429 [2024-05-15 04:42:53.529137] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:05:39.429 [2024-05-15 04:42:53.529192] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:05:39.429 [2024-05-15 04:42:53.529225] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:05:39.429 [2024-05-15 04:42:53.529251] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:05:39.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.429 suites 1 1 n/a 0 0 00:05:39.429 tests 4 4 4 0 0 00:05:39.429 asserts 148 148 148 0 n/a 00:05:39.429 00:05:39.429 Elapsed time = 0.010 seconds 00:05:39.429 04:42:53 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:05:39.429 00:05:39.429 00:05:39.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.429 http://cunit.sourceforge.net/ 00:05:39.429 00:05:39.429 00:05:39.429 Suite: ftl_layout_upgrade 00:05:39.429 Test: test_l2p_upgrade ...passed 00:05:39.429 00:05:39.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.429 suites 1 1 n/a 0 0 00:05:39.429 tests 1 1 1 0 0 00:05:39.429 asserts 140 
140 140 0 n/a 00:05:39.429 00:05:39.429 Elapsed time = 0.000 seconds 00:05:39.429 00:05:39.430 real 0m0.492s 00:05:39.430 user 0m0.205s 00:05:39.430 sys 0m0.289s 00:05:39.430 04:42:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.430 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 ************************************ 00:05:39.430 END TEST unittest_ftl 00:05:39.430 ************************************ 00:05:39.430 04:42:53 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:39.430 04:42:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.430 04:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.430 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 ************************************ 00:05:39.430 START TEST unittest_accel 00:05:39.430 ************************************ 00:05:39.430 04:42:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:39.430 00:05:39.430 00:05:39.430 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.430 http://cunit.sourceforge.net/ 00:05:39.430 00:05:39.430 00:05:39.430 Suite: accel_sequence 00:05:39.430 Test: test_sequence_fill_copy ...passed 00:05:39.430 Test: test_sequence_abort ...passed 00:05:39.430 Test: test_sequence_append_error ...passed 00:05:39.430 Test: test_sequence_completion_error ...passed 00:05:39.430 Test: test_sequence_copy_elision ...[2024-05-15 04:42:53.635770] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fba9904f7c0 00:05:39.430 [2024-05-15 04:42:53.636041] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fba9904f7c0 00:05:39.430 [2024-05-15 04:42:53.636079] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fba9904f7c0 00:05:39.430 [2024-05-15 04:42:53.636132] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fba9904f7c0 00:05:39.430 passed 00:05:39.430 Test: test_sequence_accel_buffers ...passed 00:05:39.430 Test: test_sequence_memory_domain ...passed 00:05:39.430 Test: test_sequence_module_memory_domain ...passed 00:05:39.430 Test: test_sequence_driver ...[2024-05-15 04:42:53.641553] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:05:39.430 [2024-05-15 04:42:53.641676] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:05:39.430 passed 00:05:39.430 Test: test_sequence_same_iovs ...passed 00:05:39.430 Test: test_sequence_crc32 ...[2024-05-15 04:42:53.645009] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fba988767c0 using driver: ut 00:05:39.430 [2024-05-15 04:42:53.645108] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fba988767c0 through driver: ut 00:05:39.430 passed 00:05:39.430 Suite: accel 00:05:39.430 Test: test_spdk_accel_task_complete ...passed 00:05:39.430 Test: test_get_task ...passed 00:05:39.430 Test: test_spdk_accel_submit_copy 
...passed 00:05:39.430 Test: test_spdk_accel_submit_dualcast ...[2024-05-15 04:42:53.648968] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:39.430 passed 00:05:39.430 Test: test_spdk_accel_submit_compare ...passed 00:05:39.430 Test: test_spdk_accel_submit_fill ...passed 00:05:39.430 Test: test_spdk_accel_submit_crc32c ...passed 00:05:39.430 Test: test_spdk_accel_submit_crc32cv ...passed 00:05:39.430 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:05:39.430 Test: test_spdk_accel_submit_xor ...passed 00:05:39.430 Test: test_spdk_accel_module_find_by_name ...passed 00:05:39.430 Test: test_spdk_accel_module_register ...[2024-05-15 04:42:53.649035] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:39.430 passed 00:05:39.430 00:05:39.430 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.430 suites 2 2 n/a 0 0 00:05:39.430 tests 23 23 23 0 0 00:05:39.430 asserts 754 754 754 0 n/a 00:05:39.430 00:05:39.430 Elapsed time = 0.020 seconds 00:05:39.689 ************************************ 00:05:39.689 END TEST unittest_accel 00:05:39.689 ************************************ 00:05:39.689 00:05:39.689 real 0m0.057s 00:05:39.689 user 0m0.028s 00:05:39.689 sys 0m0.029s 00:05:39.689 04:42:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.689 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.689 04:42:53 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:39.689 04:42:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.689 04:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.689 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.689 ************************************ 00:05:39.689 START TEST unittest_ioat 00:05:39.689 ************************************ 00:05:39.689 04:42:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:39.689 00:05:39.689 00:05:39.689 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.689 http://cunit.sourceforge.net/ 00:05:39.689 00:05:39.689 00:05:39.689 Suite: ioat 00:05:39.689 Test: ioat_state_check ...passed 00:05:39.689 00:05:39.689 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.689 suites 1 1 n/a 0 0 00:05:39.689 tests 1 1 1 0 0 00:05:39.689 asserts 32 32 32 0 n/a 00:05:39.689 00:05:39.689 Elapsed time = 0.000 seconds 00:05:39.689 ************************************ 00:05:39.689 END TEST unittest_ioat 00:05:39.689 ************************************ 00:05:39.689 00:05:39.689 real 0m0.030s 00:05:39.689 user 0m0.014s 00:05:39.689 sys 0m0.016s 00:05:39.689 04:42:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.689 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.689 04:42:53 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:39.689 04:42:53 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:39.689 04:42:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.689 04:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.689 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.689 ************************************ 00:05:39.689 START TEST 
unittest_idxd_user 00:05:39.689 ************************************ 00:05:39.689 04:42:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:39.689 00:05:39.689 00:05:39.689 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.689 http://cunit.sourceforge.net/ 00:05:39.689 00:05:39.689 00:05:39.689 Suite: idxd_user 00:05:39.689 Test: test_idxd_wait_cmd ...[2024-05-15 04:42:53.823098] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:39.689 passed 00:05:39.689 Test: test_idxd_reset_dev ...passed 00:05:39.689 Test: test_idxd_group_config ...passed 00:05:39.689 Test: test_idxd_wq_config ...passed 00:05:39.689 00:05:39.689 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.689 suites 1 1 n/a 0 0 00:05:39.689 tests 4 4 4 0 0 00:05:39.689 asserts 20 20 20 0 n/a 00:05:39.689 00:05:39.689 Elapsed time = 0.000 seconds 00:05:39.689 [2024-05-15 04:42:53.823337] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:05:39.689 [2024-05-15 04:42:53.823447] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:39.689 [2024-05-15 04:42:53.823490] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:05:39.689 ************************************ 00:05:39.689 END TEST unittest_idxd_user 00:05:39.689 ************************************ 00:05:39.689 00:05:39.689 real 0m0.031s 00:05:39.689 user 0m0.009s 00:05:39.689 sys 0m0.022s 00:05:39.689 04:42:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.689 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.689 04:42:53 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:05:39.689 04:42:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.689 04:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.689 04:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.689 ************************************ 00:05:39.689 START TEST unittest_iscsi 00:05:39.689 ************************************ 00:05:39.689 04:42:53 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:05:39.689 04:42:53 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:05:39.689 00:05:39.689 00:05:39.689 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.689 http://cunit.sourceforge.net/ 00:05:39.689 00:05:39.689 00:05:39.689 Suite: conn_suite 00:05:39.689 Test: read_task_split_in_order_case ...passed 00:05:39.689 Test: read_task_split_reverse_order_case ...passed 00:05:39.689 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:05:39.689 Test: process_non_read_task_completion_test ...passed 00:05:39.689 Test: free_tasks_on_connection ...passed 00:05:39.689 Test: free_tasks_with_queued_datain ...passed 00:05:39.689 Test: abort_queued_datain_task_test ...passed 00:05:39.689 Test: abort_queued_datain_tasks_test ...passed 00:05:39.689 00:05:39.689 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.689 suites 1 1 n/a 0 0 00:05:39.689 tests 8 8 8 0 0 00:05:39.689 asserts 230 230 230 0 n/a 00:05:39.689 00:05:39.689 Elapsed time = 0.000 seconds 00:05:39.689 04:42:53 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:05:39.949 00:05:39.949 00:05:39.949 CUnit - A unit 
testing framework for C - Version 2.1-3 00:05:39.949 http://cunit.sourceforge.net/ 00:05:39.949 00:05:39.949 00:05:39.949 Suite: iscsi_suite 00:05:39.949 Test: param_negotiation_test ...passed 00:05:39.949 Test: list_negotiation_test ...passed 00:05:39.949 Test: parse_valid_test ...passed 00:05:39.949 Test: parse_invalid_test ...[2024-05-15 04:42:53.933967] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:05:39.949 [2024-05-15 04:42:53.934190] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:05:39.949 [2024-05-15 04:42:53.934231] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:05:39.949 [2024-05-15 04:42:53.934293] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:05:39.949 passed 00:05:39.949 00:05:39.949 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.949 suites 1 1 n/a 0 0 00:05:39.949 tests 4 4 4 0 0 00:05:39.949 asserts 161 161 161 0 n/a 00:05:39.949 00:05:39.949 Elapsed time = 0.010 seconds 00:05:39.949 [2024-05-15 04:42:53.934642] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:05:39.949 [2024-05-15 04:42:53.934756] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:05:39.949 [2024-05-15 04:42:53.934868] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:05:39.949 04:42:53 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:05:39.949 00:05:39.949 00:05:39.949 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.949 http://cunit.sourceforge.net/ 00:05:39.949 00:05:39.949 00:05:39.949 Suite: iscsi_target_node_suite 00:05:39.949 Test: add_lun_test_cases ...passed 00:05:39.949 Test: allow_any_allowed ...passed 00:05:39.949 Test: allow_ipv6_allowed ...passed 00:05:39.949 Test: allow_ipv6_denied ...passed 00:05:39.949 Test: allow_ipv6_invalid ...passed 00:05:39.949 Test: allow_ipv4_allowed ...passed 00:05:39.949 Test: allow_ipv4_denied ...passed 00:05:39.949 Test: allow_ipv4_invalid ...passed 00:05:39.949 Test: node_access_allowed ...passed 00:05:39.949 Test: node_access_denied_by_empty_netmask ...passed 00:05:39.949 Test: node_access_multi_initiator_groups_cases ...[2024-05-15 04:42:53.958282] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:05:39.949 [2024-05-15 04:42:53.958521] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:05:39.949 [2024-05-15 04:42:53.958601] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:39.949 [2024-05-15 04:42:53.958639] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:39.949 [2024-05-15 04:42:53.958664] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:05:39.949 passed 00:05:39.949 Test: allow_iscsi_name_multi_maps_case ...passed 00:05:39.949 Test: chap_param_test_cases ...passed 00:05:39.949 00:05:39.949 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.949 suites 1 1 n/a 0 0 00:05:39.949 tests 13 13 13 0 0 00:05:39.949 asserts 50 50 50 0 n/a 00:05:39.949 
00:05:39.949 Elapsed time = 0.000 seconds 00:05:39.949 [2024-05-15 04:42:53.959050] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:05:39.949 [2024-05-15 04:42:53.959090] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:05:39.949 [2024-05-15 04:42:53.959145] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:05:39.949 [2024-05-15 04:42:53.959176] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:05:39.949 [2024-05-15 04:42:53.959209] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:05:39.949 04:42:53 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:05:39.949 00:05:39.949 00:05:39.949 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.949 http://cunit.sourceforge.net/ 00:05:39.949 00:05:39.949 00:05:39.949 Suite: iscsi_suite 00:05:39.949 Test: op_login_check_target_test ...passed 00:05:39.949 Test: op_login_session_normal_test ...[2024-05-15 04:42:53.994051] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:05:39.949 [2024-05-15 04:42:53.994346] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:39.949 [2024-05-15 04:42:53.994403] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:39.949 [2024-05-15 04:42:53.994445] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:39.949 [2024-05-15 04:42:53.994502] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:05:39.949 [2024-05-15 04:42:53.994606] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:39.949 [2024-05-15 04:42:53.994773] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:05:39.949 passed 00:05:39.949 Test: maxburstlength_test ...[2024-05-15 04:42:53.994834] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:39.949 passed 00:05:39.949 Test: underflow_for_read_transfer_test ...passed 00:05:39.949 Test: underflow_for_zero_read_transfer_test ...[2024-05-15 04:42:53.995068] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:39.949 [2024-05-15 04:42:53.995126] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:05:39.949 passed 00:05:39.949 Test: underflow_for_request_sense_test ...passed 00:05:39.949 Test: underflow_for_check_condition_test ...passed 00:05:39.949 Test: add_transfer_task_test ...passed 00:05:39.949 Test: get_transfer_task_test ...passed 00:05:39.949 Test: del_transfer_task_test ...passed 00:05:39.949 Test: 
clear_all_transfer_tasks_test ...passed 00:05:39.949 Test: build_iovs_test ...passed 00:05:39.949 Test: build_iovs_with_md_test ...passed 00:05:39.949 Test: pdu_hdr_op_login_test ...[2024-05-15 04:42:53.995922] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:05:39.949 [2024-05-15 04:42:53.995999] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:05:39.949 [2024-05-15 04:42:53.996052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:05:39.949 passed 00:05:39.949 Test: pdu_hdr_op_text_test ...[2024-05-15 04:42:53.996122] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:39.949 [2024-05-15 04:42:53.996194] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:05:39.949 [2024-05-15 04:42:53.996240] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:05:39.949 passed 00:05:39.949 Test: pdu_hdr_op_logout_test ...[2024-05-15 04:42:53.996280] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:05:39.949 passed 00:05:39.949 Test: pdu_hdr_op_scsi_test ...[2024-05-15 04:42:53.996393] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:39.949 [2024-05-15 04:42:53.996426] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:39.949 [2024-05-15 04:42:53.996466] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:05:39.949 [2024-05-15 04:42:53.996529] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:39.949 passed 00:05:39.949 Test: pdu_hdr_op_task_mgmt_test ...[2024-05-15 04:42:53.996605] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:05:39.949 [2024-05-15 04:42:53.996700] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:05:39.949 [2024-05-15 04:42:53.996785] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:05:39.949 [2024-05-15 04:42:53.996835] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:05:39.949 passed 00:05:39.949 Test: pdu_hdr_op_nopout_test ...[2024-05-15 04:42:53.996961] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:05:39.949 [2024-05-15 04:42:53.997030] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:39.949 [2024-05-15 04:42:53.997062] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:39.949 [2024-05-15 
04:42:53.997096] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:05:39.949 passed 00:05:39.949 Test: pdu_hdr_op_data_test ...[2024-05-15 04:42:53.997149] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:05:39.949 [2024-05-15 04:42:53.997214] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:05:39.949 [2024-05-15 04:42:53.997273] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:39.949 [2024-05-15 04:42:53.997326] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:05:39.949 [2024-05-15 04:42:53.997374] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:05:39.949 [2024-05-15 04:42:53.997421] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:05:39.949 passed 00:05:39.950 Test: empty_text_with_cbit_test ...[2024-05-15 04:42:53.997455] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:05:39.950 passed 00:05:39.950 Test: pdu_payload_read_test ...[2024-05-15 04:42:53.998612] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:05:39.950 passed 00:05:39.950 Test: data_out_pdu_sequence_test ...passed 00:05:39.950 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:05:39.950 00:05:39.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.950 suites 1 1 n/a 0 0 00:05:39.950 tests 24 24 24 0 0 00:05:39.950 asserts 150253 150253 150253 0 n/a 00:05:39.950 00:05:39.950 Elapsed time = 0.000 seconds 00:05:39.950 04:42:54 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:05:39.950 00:05:39.950 00:05:39.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.950 http://cunit.sourceforge.net/ 00:05:39.950 00:05:39.950 00:05:39.950 Suite: init_grp_suite 00:05:39.950 Test: create_initiator_group_success_case ...passed 00:05:39.950 Test: find_initiator_group_success_case ...passed 00:05:39.950 Test: register_initiator_group_twice_case ...passed 00:05:39.950 Test: add_initiator_name_success_case ...passed 00:05:39.950 Test: add_initiator_name_fail_case ...passed 00:05:39.950 Test: delete_all_initiator_names_success_case ...passed 00:05:39.950 Test: add_netmask_success_case ...passed 00:05:39.950 Test: add_netmask_fail_case ...[2024-05-15 04:42:54.036245] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:05:39.950 [2024-05-15 04:42:54.036747] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:05:39.950 passed 00:05:39.950 Test: delete_all_netmasks_success_case ...passed 00:05:39.950 Test: initiator_name_overwrite_all_to_any_case ...passed 00:05:39.950 Test: netmask_overwrite_all_to_any_case ...passed 00:05:39.950 Test: add_delete_initiator_names_case ...passed 00:05:39.950 Test: add_duplicated_initiator_names_case ...passed 00:05:39.950 Test: delete_nonexisting_initiator_names_case 
...passed 00:05:39.950 Test: add_delete_netmasks_case ...passed 00:05:39.950 Test: add_duplicated_netmasks_case ...passed 00:05:39.950 Test: delete_nonexisting_netmasks_case ...passed 00:05:39.950 00:05:39.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.950 suites 1 1 n/a 0 0 00:05:39.950 tests 17 17 17 0 0 00:05:39.950 asserts 108 108 108 0 n/a 00:05:39.950 00:05:39.950 Elapsed time = 0.010 seconds 00:05:39.950 04:42:54 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:05:39.950 00:05:39.950 00:05:39.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.950 http://cunit.sourceforge.net/ 00:05:39.950 00:05:39.950 00:05:39.950 Suite: portal_grp_suite 00:05:39.950 Test: portal_create_ipv4_normal_case ...passed 00:05:39.950 Test: portal_create_ipv6_normal_case ...passed 00:05:39.950 Test: portal_create_ipv4_wildcard_case ...passed 00:05:39.950 Test: portal_create_ipv6_wildcard_case ...passed 00:05:39.950 Test: portal_create_twice_case ...passed 00:05:39.950 Test: portal_grp_register_unregister_case ...passed 00:05:39.950 Test: portal_grp_register_twice_case ...passed 00:05:39.950 Test: portal_grp_add_delete_case ...[2024-05-15 04:42:54.057981] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:05:39.950 passed 00:05:39.950 Test: portal_grp_add_delete_twice_case ...passed 00:05:39.950 00:05:39.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.950 suites 1 1 n/a 0 0 00:05:39.950 tests 9 9 9 0 0 00:05:39.950 asserts 44 44 44 0 n/a 00:05:39.950 00:05:39.950 Elapsed time = 0.000 seconds 00:05:39.950 ************************************ 00:05:39.950 END TEST unittest_iscsi 00:05:39.950 ************************************ 00:05:39.950 00:05:39.950 real 0m0.189s 00:05:39.950 user 0m0.087s 00:05:39.950 sys 0m0.104s 00:05:39.950 04:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.950 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.950 04:42:54 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:05:39.950 04:42:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.950 04:42:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.950 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.950 ************************************ 00:05:39.950 START TEST unittest_json 00:05:39.950 ************************************ 00:05:39.950 04:42:54 -- common/autotest_common.sh@1104 -- # unittest_json 00:05:39.950 04:42:54 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:05:39.950 00:05:39.950 00:05:39.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.950 http://cunit.sourceforge.net/ 00:05:39.950 00:05:39.950 00:05:39.950 Suite: json 00:05:39.950 Test: test_parse_literal ...passed 00:05:39.950 Test: test_parse_string_simple ...passed 00:05:39.950 Test: test_parse_string_control_chars ...passed 00:05:39.950 Test: test_parse_string_utf8 ...passed 00:05:39.950 Test: test_parse_string_escapes_twochar ...passed 00:05:39.950 Test: test_parse_string_escapes_unicode ...passed 00:05:39.950 Test: test_parse_number ...passed 00:05:39.950 Test: test_parse_array ...passed 00:05:39.950 Test: test_parse_object ...passed 00:05:39.950 Test: test_parse_nesting ...passed 00:05:39.950 Test: test_parse_comment ...passed 00:05:39.950 00:05:39.950 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:39.950 suites 1 1 n/a 0 0 00:05:39.950 tests 11 11 11 0 0 00:05:39.950 asserts 1516 1516 1516 0 n/a 00:05:39.950 00:05:39.950 Elapsed time = 0.000 seconds 00:05:39.950 04:42:54 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:05:39.950 00:05:39.950 00:05:39.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.950 http://cunit.sourceforge.net/ 00:05:39.950 00:05:39.950 00:05:39.950 Suite: json 00:05:39.950 Test: test_strequal ...passed 00:05:39.950 Test: test_num_to_uint16 ...passed 00:05:39.950 Test: test_num_to_int32 ...passed 00:05:39.950 Test: test_num_to_uint64 ...passed 00:05:39.950 Test: test_decode_object ...passed 00:05:39.950 Test: test_decode_array ...passed 00:05:39.950 Test: test_decode_bool ...passed 00:05:39.950 Test: test_decode_uint16 ...passed 00:05:39.950 Test: test_decode_int32 ...passed 00:05:39.950 Test: test_decode_uint32 ...passed 00:05:39.950 Test: test_decode_uint64 ...passed 00:05:39.950 Test: test_decode_string ...passed 00:05:39.950 Test: test_decode_uuid ...passed 00:05:39.950 Test: test_find ...passed 00:05:39.950 Test: test_find_array ...passed 00:05:39.950 Test: test_iterating ...passed 00:05:39.950 Test: test_free_object ...passed 00:05:39.950 00:05:39.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.950 suites 1 1 n/a 0 0 00:05:39.950 tests 17 17 17 0 0 00:05:39.950 asserts 236 236 236 0 n/a 00:05:39.950 00:05:39.950 Elapsed time = 0.000 seconds 00:05:40.210 04:42:54 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:05:40.210 00:05:40.210 00:05:40.210 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.210 http://cunit.sourceforge.net/ 00:05:40.210 00:05:40.210 00:05:40.210 Suite: json 00:05:40.210 Test: test_write_literal ...passed 00:05:40.210 Test: test_write_string_simple ...passed 00:05:40.210 Test: test_write_string_escapes ...passed 00:05:40.210 Test: test_write_string_utf16le ...passed 00:05:40.210 Test: test_write_number_int32 ...passed 00:05:40.210 Test: test_write_number_uint32 ...passed 00:05:40.210 Test: test_write_number_uint128 ...passed 00:05:40.210 Test: test_write_string_number_uint128 ...passed 00:05:40.210 Test: test_write_number_int64 ...passed 00:05:40.210 Test: test_write_number_uint64 ...passed 00:05:40.210 Test: test_write_number_double ...passed 00:05:40.210 Test: test_write_uuid ...passed 00:05:40.210 Test: test_write_array ...passed 00:05:40.210 Test: test_write_object ...passed 00:05:40.210 Test: test_write_nesting ...passed 00:05:40.210 Test: test_write_val ...passed 00:05:40.210 00:05:40.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.210 suites 1 1 n/a 0 0 00:05:40.210 tests 16 16 16 0 0 00:05:40.210 asserts 918 918 918 0 n/a 00:05:40.210 00:05:40.210 Elapsed time = 0.000 seconds 00:05:40.210 04:42:54 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:05:40.210 00:05:40.210 00:05:40.210 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.210 http://cunit.sourceforge.net/ 00:05:40.210 00:05:40.210 00:05:40.210 Suite: jsonrpc 00:05:40.210 Test: test_parse_request ...passed 00:05:40.210 Test: test_parse_request_streaming ...passed 00:05:40.210 00:05:40.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.210 suites 1 1 n/a 0 0 00:05:40.210 tests 2 2 2 0 0 00:05:40.210 asserts 289 289 289 0 n/a 00:05:40.210 00:05:40.210 Elapsed time = 0.000 seconds 00:05:40.210 
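
Editor's note: the json, json_util, json_write, and jsonrpc_server runs above all follow the same CUnit pattern — each *_ut binary registers one suite, runs it in verbose basic mode, and exits nonzero on failure, which is what produces the per-test "passed" markers and the "Run Summary" tables in this log. A minimal harness of the same shape, illustrative only (placeholder suite and test names, not the SPDK sources):

    #include <CUnit/Basic.h>

    /* Placeholder test body; the real SPDK suites assert on parser state. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("json", NULL, NULL);
        if (suite == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_add_test(suite, "test_example", test_example);

        CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the "Test: ... passed" lines */
        CU_basic_run_tests();              /* prints the "Run Summary" table      */
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }
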
00:05:40.210 real 0m0.109s 00:05:40.210 user 0m0.053s 00:05:40.210 sys 0m0.058s 00:05:40.210 04:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.210 ************************************ 00:05:40.210 END TEST unittest_json 00:05:40.210 ************************************ 00:05:40.210 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.210 04:42:54 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:05:40.210 04:42:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.210 04:42:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.210 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.210 ************************************ 00:05:40.210 START TEST unittest_rpc 00:05:40.210 ************************************ 00:05:40.210 04:42:54 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:05:40.210 04:42:54 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:05:40.210 00:05:40.210 00:05:40.210 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.210 http://cunit.sourceforge.net/ 00:05:40.210 00:05:40.210 00:05:40.210 Suite: rpc 00:05:40.210 Test: test_jsonrpc_handler ...passed 00:05:40.210 Test: test_spdk_rpc_is_method_allowed ...passed 00:05:40.210 Test: test_rpc_get_methods ...passed 00:05:40.210 Test: test_rpc_spdk_get_version ...passed 00:05:40.210 Test: test_spdk_rpc_listen_close ...passed 00:05:40.210 00:05:40.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.210 suites 1 1 n/a 0 0 00:05:40.210 tests 5 5 5 0 0 00:05:40.210 asserts 20 20 20 0 n/a 00:05:40.210 00:05:40.210 Elapsed time = 0.000 seconds 00:05:40.210 [2024-05-15 04:42:54.308853] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:05:40.210 00:05:40.210 real 0m0.032s 00:05:40.210 user 0m0.016s 00:05:40.210 sys 0m0.017s 00:05:40.210 ************************************ 00:05:40.210 END TEST unittest_rpc 00:05:40.210 ************************************ 00:05:40.210 04:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.210 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.210 04:42:54 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:40.210 04:42:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.210 04:42:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.210 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.210 ************************************ 00:05:40.210 START TEST unittest_notify 00:05:40.210 ************************************ 00:05:40.210 04:42:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:40.210 00:05:40.210 00:05:40.210 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.210 http://cunit.sourceforge.net/ 00:05:40.210 00:05:40.210 00:05:40.210 Suite: app_suite 00:05:40.210 Test: notify ...passed 00:05:40.210 00:05:40.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.210 suites 1 1 n/a 0 0 00:05:40.210 tests 1 1 1 0 0 00:05:40.210 asserts 13 13 13 0 n/a 00:05:40.210 00:05:40.210 Elapsed time = 0.000 seconds 00:05:40.210 00:05:40.210 real 0m0.030s 00:05:40.210 user 0m0.013s 00:05:40.210 sys 0m0.018s 00:05:40.210 04:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.210 ************************************ 00:05:40.210 END TEST unittest_notify 00:05:40.210 
************************************ 00:05:40.210 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.210 04:42:54 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:05:40.210 04:42:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.210 04:42:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.210 04:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.470 ************************************ 00:05:40.470 START TEST unittest_nvme 00:05:40.470 ************************************ 00:05:40.470 04:42:54 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:05:40.470 04:42:54 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:05:40.470 00:05:40.470 00:05:40.470 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.470 http://cunit.sourceforge.net/ 00:05:40.470 00:05:40.470 00:05:40.470 Suite: nvme 00:05:40.470 Test: test_opc_data_transfer ...passed 00:05:40.470 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:05:40.470 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:05:40.470 Test: test_trid_parse_and_compare ...passed 00:05:40.470 Test: test_trid_trtype_str ...passed 00:05:40.470 Test: test_trid_adrfam_str ...passed 00:05:40.470 Test: test_nvme_ctrlr_probe ...passed 00:05:40.470 Test: test_spdk_nvme_probe ...passed 00:05:40.470 Test: test_spdk_nvme_connect ...passed 00:05:40.470 Test: test_nvme_ctrlr_probe_internal ...passed 00:05:40.470 Test: test_nvme_init_controllers ...passed 00:05:40.470 Test: test_nvme_driver_init ...[2024-05-15 04:42:54.463481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:05:40.470 [2024-05-15 04:42:54.463753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:40.470 [2024-05-15 04:42:54.463855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:05:40.470 [2024-05-15 04:42:54.463898] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:40.470 [2024-05-15 04:42:54.463933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:05:40.470 [2024-05-15 04:42:54.464022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:40.470 [2024-05-15 04:42:54.464316] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:40.470 [2024-05-15 04:42:54.464409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:40.470 [2024-05-15 04:42:54.464445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:40.470 [2024-05-15 04:42:54.464491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:05:40.470 [2024-05-15 04:42:54.464529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:40.470 [2024-05-15 04:42:54.464627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:05:40.470 [2024-05-15 04:42:54.464876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is 
not started yet 00:05:40.470 [2024-05-15 04:42:54.464930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:05:40.470 [2024-05-15 04:42:54.465070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:40.470 [2024-05-15 04:42:54.465109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:05:40.470 [2024-05-15 04:42:54.465195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:05:40.470 [2024-05-15 04:42:54.465268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:05:40.470 [2024-05-15 04:42:54.465305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:40.470 passed 00:05:40.470 Test: test_spdk_nvme_detach ...passed 00:05:40.470 Test: test_nvme_completion_poll_cb ...passed 00:05:40.470 Test: test_nvme_user_copy_cmd_complete ...[2024-05-15 04:42:54.573816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:05:40.470 [2024-05-15 04:42:54.574030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:05:40.470 passed 00:05:40.470 Test: test_nvme_allocate_request_null ...passed 00:05:40.470 Test: test_nvme_allocate_request ...passed 00:05:40.470 Test: test_nvme_free_request ...passed 00:05:40.470 Test: test_nvme_allocate_request_user_copy ...passed 00:05:40.470 Test: test_nvme_robust_mutex_init_shared ...passed 00:05:40.470 Test: test_nvme_request_check_timeout ...passed 00:05:40.470 Test: test_nvme_wait_for_completion ...passed 00:05:40.470 Test: test_spdk_nvme_parse_func ...passed 00:05:40.470 Test: test_spdk_nvme_detach_async ...passed 00:05:40.470 Test: test_nvme_parse_addr ...[2024-05-15 04:42:54.575794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:05:40.470 passed 00:05:40.470 00:05:40.470 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.470 suites 1 1 n/a 0 0 00:05:40.470 tests 25 25 25 0 0 00:05:40.470 asserts 326 326 326 0 n/a 00:05:40.470 00:05:40.470 Elapsed time = 0.000 seconds 00:05:40.470 04:42:54 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:05:40.470 00:05:40.470 00:05:40.470 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.470 http://cunit.sourceforge.net/ 00:05:40.470 00:05:40.470 00:05:40.470 Suite: nvme_ctrlr 00:05:40.470 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-15 04:42:54.612790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.470 passed 00:05:40.470 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-15 04:42:54.614904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.470 passed 00:05:40.470 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-15 04:42:54.616132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.470 passed 00:05:40.471 
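
Editor's note: the parse_next_key and spdk_nvme_transport_id_parse errors in the unittest_nvme run above are provoked by feeding malformed transport ID strings to the public parser, which expects whitespace-separated key:value (or key=value) pairs. A sketch of the calling convention, assuming an SPDK build environment; the well-formed and malformed strings below are examples, not the suite's exact inputs:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    static void try_parse(const char *str)
    {
        struct spdk_nvme_transport_id trid;
        int rc;

        memset(&trid, 0, sizeof(trid));
        rc = spdk_nvme_transport_id_parse(&trid, str);
        printf("parse \"%s\" -> rc=%d\n", str, rc);
    }

    int main(void)
    {
        try_parse("trtype:PCIe traddr:0000:00:04.0"); /* well-formed */
        try_parse("trtype-PCIe"); /* no ':' or '=' separator -> parse error */
        return 0;
    }
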
Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-15 04:42:54.617338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 passed 00:05:40.471 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-15 04:42:54.618553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 [2024-05-15 04:42:54.619698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 04:42:54.620904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 04:42:54.622066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:40.471 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-15 04:42:54.624347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 [2024-05-15 04:42:54.626567] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 04:42:54.627708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:40.471 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-15 04:42:54.630014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 [2024-05-15 04:42:54.631151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-15 04:42:54.633372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:40.471 Test: test_nvme_ctrlr_init_delay ...[2024-05-15 04:42:54.635630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 passed 00:05:40.471 Test: test_alloc_io_qpair_rr_1 ...[2024-05-15 04:42:54.636981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 [2024-05-15 04:42:54.637075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:40.471 passed 00:05:40.471 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:05:40.471 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:05:40.471 Test: test_alloc_io_qpair_wrr_1 ...passed 00:05:40.471 Test: test_alloc_io_qpair_wrr_2 ...passed 00:05:40.471 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:05:40.471 Test: test_nvme_ctrlr_fail ...passed 00:05:40.471 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:05:40.471 Test: test_nvme_ctrlr_set_supported_features ...passed 00:05:40.471 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:05:40.471 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-15 04:42:54.637540] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:40.471 [2024-05-15 04:42:54.637607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:40.471 [2024-05-15 04:42:54.637651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:40.471 [2024-05-15 04:42:54.637835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 [2024-05-15 04:42:54.637913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.471 [2024-05-15 04:42:54.637968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:40.471 [2024-05-15 04:42:54.638115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:05:40.471 [2024-05-15 04:42:54.638247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:40.471 [2024-05-15 04:42:54.638311] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:05:40.471 [2024-05-15 04:42:54.638366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:40.471 [2024-05-15 04:42:54.638423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:05:40.471 [2024-05-15 04:42:54.638646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:05:40.730 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:05:40.730 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:05:40.730 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-15 04:42:54.819063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-15 04:42:54.825863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-15 04:42:54.827037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 [2024-05-15 04:42:54.827117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:05:40.730 passed 00:05:40.730 Test: test_alloc_io_qpair_fail ...[2024-05-15 04:42:54.828248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 [2024-05-15 04:42:54.828365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_add_remove_process ...passed 00:05:40.730 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:05:40.730 Test: test_nvme_ctrlr_set_state ...[2024-05-15 04:42:54.828814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-15 04:42:54.828863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-15 04:42:54.851782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-15 04:42:54.895285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_reset ...[2024-05-15 04:42:54.896995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_aer_callback ...[2024-05-15 04:42:54.897501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-15 04:42:54.898975] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:05:40.730 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:05:40.730 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-15 04:42:54.900850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:05:40.730 Test: test_nvme_ctrlr_ana_resize ...[2024-05-15 04:42:54.902498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:05:40.730 Test: test_nvme_transport_ctrlr_ready ...passed 00:05:40.730 Test: test_nvme_ctrlr_disable ...[2024-05-15 04:42:54.904179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:05:40.730 [2024-05-15 04:42:54.904238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:05:40.730 [2024-05-15 04:42:54.904284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:40.730 passed 00:05:40.730 00:05:40.730 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.730 suites 1 1 n/a 0 0 00:05:40.730 tests 43 43 43 0 0 00:05:40.730 asserts 10418 10418 10418 0 n/a 00:05:40.730 00:05:40.730 Elapsed time = 0.250 seconds 00:05:40.730 04:42:54 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:05:40.730 00:05:40.730 
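
Editor's note: nearly every nvme_ctrlr test above logs "admin_queue_size 0 is less than minimum defined by NVMe spec, use min value" because the unit-test stubs hand the constructor a zeroed options struct and the constructor clamps it. Application code normally starts from the defaults instead; a sketch assuming an SPDK build environment (the override value 32 is arbitrary):

    #include "spdk/nvme.h"

    void prepare_ctrlr_opts(struct spdk_nvme_ctrlr_opts *opts)
    {
        /* Fill every field with spec-conformant defaults first... */
        spdk_nvme_ctrlr_get_default_ctrlr_opts(opts, sizeof(*opts));
        /* ...then override selectively. A zero here would be clamped to the
         * NVMe-spec minimum with exactly the warning seen in this log. */
        opts->admin_queue_size = 32;
    }
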
00:05:40.730 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.730 http://cunit.sourceforge.net/ 00:05:40.730 00:05:40.730 00:05:40.730 Suite: nvme_ctrlr_cmd 00:05:40.730 Test: test_get_log_pages ...passed 00:05:40.730 Test: test_set_feature_cmd ...passed 00:05:40.730 Test: test_set_feature_ns_cmd ...passed 00:05:40.730 Test: test_get_feature_cmd ...passed 00:05:40.730 Test: test_get_feature_ns_cmd ...passed 00:05:40.730 Test: test_abort_cmd ...passed 00:05:40.730 Test: test_set_host_id_cmds ...[2024-05-15 04:42:54.960176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:05:40.730 passed 00:05:40.730 Test: test_io_cmd_raw_no_payload_build ...passed 00:05:40.730 Test: test_io_raw_cmd ...passed 00:05:40.730 Test: test_io_raw_cmd_with_md ...passed 00:05:40.730 Test: test_namespace_attach ...passed 00:05:40.730 Test: test_namespace_detach ...passed 00:05:40.730 Test: test_namespace_create ...passed 00:05:40.730 Test: test_namespace_delete ...passed 00:05:40.730 Test: test_doorbell_buffer_config ...passed 00:05:40.730 Test: test_format_nvme ...passed 00:05:40.730 Test: test_fw_commit ...passed 00:05:40.730 Test: test_fw_image_download ...passed 00:05:40.730 Test: test_sanitize ...passed 00:05:40.730 Test: test_directive ...passed 00:05:40.730 Test: test_nvme_request_add_abort ...passed 00:05:40.730 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:05:40.730 Test: test_nvme_ctrlr_cmd_identify ...passed 00:05:40.730 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:05:40.730 00:05:40.730 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.730 suites 1 1 n/a 0 0 00:05:40.730 tests 24 24 24 0 0 00:05:40.730 asserts 198 198 198 0 n/a 00:05:40.730 00:05:40.730 Elapsed time = 0.000 seconds 00:05:40.991 04:42:54 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:05:40.991 00:05:40.991 00:05:40.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.991 http://cunit.sourceforge.net/ 00:05:40.991 00:05:40.991 00:05:40.991 Suite: nvme_ctrlr_cmd 00:05:40.991 Test: test_geometry_cmd ...passed 00:05:40.991 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:05:40.991 00:05:40.991 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.991 suites 1 1 n/a 0 0 00:05:40.991 tests 2 2 2 0 0 00:05:40.991 asserts 7 7 7 0 n/a 00:05:40.991 00:05:40.991 Elapsed time = 0.000 seconds 00:05:40.991 04:42:55 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:05:40.991 00:05:40.991 00:05:40.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.991 http://cunit.sourceforge.net/ 00:05:40.991 00:05:40.991 00:05:40.991 Suite: nvme 00:05:40.991 Test: test_nvme_ns_construct ...passed 00:05:40.991 Test: test_nvme_ns_uuid ...passed 00:05:40.991 Test: test_nvme_ns_csi ...passed 00:05:40.991 Test: test_nvme_ns_data ...passed 00:05:40.991 Test: test_nvme_ns_set_identify_data ...passed 00:05:40.991 Test: test_spdk_nvme_ns_get_values ...passed 00:05:40.991 Test: test_spdk_nvme_ns_is_active ...passed 00:05:40.991 Test: spdk_nvme_ns_supports ...passed 00:05:40.991 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:05:40.991 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:05:40.991 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:05:40.991 Test: test_nvme_ns_find_id_desc ...passed 00:05:40.991 00:05:40.991 Run Summary: Type Total Ran 
Passed Failed Inactive 00:05:40.991 suites 1 1 n/a 0 0 00:05:40.991 tests 12 12 12 0 0 00:05:40.991 asserts 83 83 83 0 n/a 00:05:40.991 00:05:40.991 Elapsed time = 0.000 seconds 00:05:40.991 04:42:55 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:05:40.991 00:05:40.991 00:05:40.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.991 http://cunit.sourceforge.net/ 00:05:40.991 00:05:40.991 00:05:40.991 Suite: nvme_ns_cmd 00:05:40.991 Test: split_test ...passed 00:05:40.991 Test: split_test2 ...passed 00:05:40.991 Test: split_test3 ...passed 00:05:40.991 Test: split_test4 ...passed 00:05:40.991 Test: test_nvme_ns_cmd_flush ...passed 00:05:40.991 Test: test_nvme_ns_cmd_dataset_management ...passed 00:05:40.991 Test: test_nvme_ns_cmd_copy ...passed 00:05:40.991 Test: test_io_flags ...[2024-05-15 04:42:55.037293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:05:40.991 passed 00:05:40.991 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:05:40.991 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:05:40.991 Test: test_nvme_ns_cmd_reservation_register ...passed 00:05:40.991 Test: test_nvme_ns_cmd_reservation_release ...passed 00:05:40.991 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:05:40.991 Test: test_nvme_ns_cmd_reservation_report ...passed 00:05:40.991 Test: test_cmd_child_request ...passed 00:05:40.991 Test: test_nvme_ns_cmd_readv ...passed 00:05:40.991 Test: test_nvme_ns_cmd_read_with_md ...passed 00:05:40.991 Test: test_nvme_ns_cmd_writev ...[2024-05-15 04:42:55.039267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:05:40.991 passed 00:05:40.991 Test: test_nvme_ns_cmd_write_with_md ...passed 00:05:40.991 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:05:40.991 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:05:40.991 Test: test_nvme_ns_cmd_comparev ...passed 00:05:40.991 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:05:40.991 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:05:40.991 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:05:40.991 Test: test_nvme_ns_cmd_setup_request ...passed 00:05:40.991 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:05:40.991 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:05:40.991 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-05-15 04:42:55.041356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:40.991 [2024-05-15 04:42:55.041809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:40.991 passed 00:05:40.991 Test: test_nvme_ns_cmd_verify ...passed 00:05:40.991 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:05:40.991 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:05:40.991 00:05:40.991 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.991 suites 1 1 n/a 0 0 00:05:40.991 tests 32 32 32 0 0 00:05:40.991 asserts 550 550 550 0 n/a 00:05:40.991 00:05:40.991 Elapsed time = 0.010 seconds 00:05:40.991 04:42:55 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:05:40.991 00:05:40.991 00:05:40.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.991 http://cunit.sourceforge.net/ 00:05:40.991 00:05:40.991 00:05:40.991 Suite: 
nvme_ns_cmd 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:05:40.991 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:05:40.991 00:05:40.991 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.991 suites 1 1 n/a 0 0 00:05:40.991 tests 12 12 12 0 0 00:05:40.991 asserts 123 123 123 0 n/a 00:05:40.991 00:05:40.991 Elapsed time = 0.010 seconds 00:05:40.991 04:42:55 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:05:40.991 00:05:40.991 00:05:40.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.991 http://cunit.sourceforge.net/ 00:05:40.991 00:05:40.991 00:05:40.991 Suite: nvme_qpair 00:05:40.991 Test: test3 ...passed 00:05:40.991 Test: test_ctrlr_failed ...passed 00:05:40.991 Test: struct_packing ...passed 00:05:40.991 Test: test_nvme_qpair_process_completions ...[2024-05-15 04:42:55.099007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:40.991 [2024-05-15 04:42:55.099622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:40.991 [2024-05-15 04:42:55.099691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:05:40.991 [2024-05-15 04:42:55.100115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:05:40.991 passed 00:05:40.991 Test: test_nvme_completion_is_retry ...passed 00:05:40.991 Test: test_get_status_string ...passed 00:05:40.991 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:05:40.991 Test: test_nvme_qpair_submit_request ...passed 00:05:40.991 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:05:40.991 Test: test_nvme_qpair_manual_complete_request ...passed 00:05:40.991 Test: test_nvme_qpair_init_deinit ...[2024-05-15 04:42:55.101043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:40.991 passed 00:05:40.991 Test: test_nvme_get_sgl_print_info ...passed 00:05:40.991 00:05:40.991 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.991 suites 1 1 n/a 0 0 00:05:40.991 tests 12 12 12 0 0 00:05:40.991 asserts 154 154 154 0 n/a 00:05:40.991 00:05:40.991 Elapsed time = 0.010 seconds 00:05:40.991 04:42:55 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:05:40.991 00:05:40.991 00:05:40.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.991 http://cunit.sourceforge.net/ 00:05:40.991 
00:05:40.991 00:05:40.991 Suite: nvme_pcie 00:05:40.991 Test: test_prp_list_append ...[2024-05-15 04:42:55.127540] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:40.991 [2024-05-15 04:42:55.127804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:05:40.991 [2024-05-15 04:42:55.128159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:05:40.991 [2024-05-15 04:42:55.128511] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:40.991 passed 00:05:40.991 Test: test_nvme_pcie_hotplug_monitor ...[2024-05-15 04:42:55.128595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:40.991 passed 00:05:40.991 Test: test_shadow_doorbell_update ...passed 00:05:40.991 Test: test_build_contig_hw_sgl_request ...passed 00:05:40.991 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:05:40.991 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:05:40.991 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:05:40.991 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-05-15 04:42:55.129111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:40.991 passed 00:05:40.991 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:05:40.991 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:05:40.992 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-05-15 04:42:55.129900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
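
Editor's note: the nvme_pcie_prp_list_append failures at the top of this suite exercise the PRP rules from the NVMe specification — the payload's virtual address must be dword aligned, and every PRP entry after the first must be page aligned. A standalone re-creation of those two checks, illustrative rather than SPDK's implementation (4 KiB page size assumed):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u /* assumed host page size */

    static bool prp_addr_ok(uintptr_t virt_addr, bool first_entry)
    {
        if (virt_addr & 0x3) {
            return false; /* e.g. "virt_addr 0x100001 not dword aligned" */
        }
        if (!first_entry && (virt_addr & (PAGE_SIZE - 1)) != 0) {
            return false; /* e.g. "PRP 2 not page aligned (0x900800)" */
        }
        return true;
    }
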
00:05:40.992 passed 00:05:40.992 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed[2024-05-15 04:42:55.129982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:05:40.992 00:05:40.992 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-05-15 04:42:55.130345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:05:40.992 passed 00:05:40.992 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:05:40.992 00:05:40.992 [2024-05-15 04:42:55.130392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:05:40.992 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.992 suites 1 1 n/a 0 0 00:05:40.992 tests 14 14 14 0 0 00:05:40.992 asserts 235 235 235 0 n/a 00:05:40.992 00:05:40.992 Elapsed time = 0.000 seconds 00:05:40.992 04:42:55 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:05:40.992 00:05:40.992 00:05:40.992 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.992 http://cunit.sourceforge.net/ 00:05:40.992 00:05:40.992 00:05:40.992 Suite: nvme_ns_cmd 00:05:40.992 Test: nvme_poll_group_create_test ...passed 00:05:40.992 Test: nvme_poll_group_add_remove_test ...passed 00:05:40.992 Test: nvme_poll_group_process_completions ...passed 00:05:40.992 Test: nvme_poll_group_destroy_test ...passed 00:05:40.992 Test: nvme_poll_group_get_free_stats ...passed 00:05:40.992 00:05:40.992 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.992 suites 1 1 n/a 0 0 00:05:40.992 tests 5 5 5 0 0 00:05:40.992 asserts 75 75 75 0 n/a 00:05:40.992 00:05:40.992 Elapsed time = 0.000 seconds 00:05:40.992 04:42:55 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:05:40.992 00:05:40.992 00:05:40.992 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.992 http://cunit.sourceforge.net/ 00:05:40.992 00:05:40.992 00:05:40.992 Suite: nvme_quirks 00:05:40.992 Test: test_nvme_quirks_striping ...passed 00:05:40.992 00:05:40.992 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.992 suites 1 1 n/a 0 0 00:05:40.992 tests 1 1 1 0 0 00:05:40.992 asserts 5 5 5 0 n/a 00:05:40.992 00:05:40.992 Elapsed time = 0.000 seconds 00:05:40.992 04:42:55 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:05:40.992 00:05:40.992 00:05:40.992 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.992 http://cunit.sourceforge.net/ 00:05:40.992 00:05:40.992 00:05:40.992 Suite: nvme_tcp 00:05:40.992 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:05:40.992 Test: test_nvme_tcp_build_iovs ...passed 00:05:40.992 Test: test_nvme_tcp_build_sgl_request ...[2024-05-15 04:42:55.194609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffe31442d90, and the iovcnt=16, remaining_size=28672 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:05:40.992 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:05:40.992 Test: test_nvme_tcp_req_complete_safe ...passed 00:05:40.992 Test: test_nvme_tcp_req_get ...passed 00:05:40.992 Test: test_nvme_tcp_req_init ...passed 00:05:40.992 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:05:40.992 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:05:40.992 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:05:40.992 Test: test_nvme_tcp_alloc_reqs ...passed 00:05:40.992 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-05-15 04:42:55.196136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444aa0 is same with the state(6) to be set 00:05:40.992 [2024-05-15 04:42:55.196431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31443c40 is same with the state(5) to be set 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_pdu_ch_handle ...[2024-05-15 04:42:55.196513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffe31444770 00:05:40.992 [2024-05-15 04:42:55.197022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:05:40.992 [2024-05-15 04:42:55.197115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.197172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:05:40.992 [2024-05-15 04:42:55.197533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.197587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:05:40.992 [2024-05-15 04:42:55.197619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.197656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.197685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_qpair_connect_sock ...[2024-05-15 04:42:55.198025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.198065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.198106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31444100 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.198239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:05:40.992 [2024-05-15 04:42:55.198286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:05:40.992 [2024-05-15 04:42:55.198823] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:05:40.992 Test: test_nvme_tcp_c2h_payload_handle ...[2024-05-15 04:42:55.198948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe314442b0): PDU Sequence Error 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_icresp_handle ...[2024-05-15 04:42:55.199359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:05:40.992 [2024-05-15 04:42:55.199404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:05:40.992 [2024-05-15 04:42:55.199437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31443c40 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.199480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:05:40.992 [2024-05-15 04:42:55.199926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31443c40 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.199985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe31443c40 is same with the state(0) to be set 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:05:40.992 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-05-15 04:42:55.200045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe31444770): PDU Sequence Error 00:05:40.992 [2024-05-15 04:42:55.200500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffe31442f30 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:05:40.992 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-05-15 04:42:55.200654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffe314425b0, errno=0, rc=0 00:05:40.992 [2024-05-15 04:42:55.201003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe314425b0 is same with the state(5) to be set 00:05:40.992 [2024-05-15 04:42:55.201064] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe314425b0 is same with the state(5) to be set 00:05:40.992 passed 00:05:40.992 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-05-15 04:42:55.201121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe314425b0 (0): Success 00:05:40.992 [2024-05-15 04:42:55.201158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe314425b0 (0): Success 00:05:41.252 [2024-05-15 04:42:55.284732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
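
Editor's note: the "Minimum queue size is 2" rejections here and just below reflect standard NVMe ring semantics — one slot is always left empty to distinguish a full queue from an empty one, so a queue of size 0 or 1 could never hold a request. Callers normally size the ring through the default I/O qpair options; a sketch assuming an SPDK build environment (128 is an arbitrary example size):

    #include "spdk/nvme.h"

    struct spdk_nvme_qpair *create_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.io_queue_size = 128; /* anything below 2 is rejected, as logged */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }
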
00:05:41.252 [2024-05-15 04:42:55.284897] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:41.252 passed 00:05:41.252 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:05:41.252 Test: test_nvme_tcp_poll_group_get_stats ...[2024-05-15 04:42:55.285116] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:41.252 [2024-05-15 04:42:55.285147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:41.252 passed 00:05:41.252 Test: test_nvme_tcp_ctrlr_construct ...[2024-05-15 04:42:55.285813] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:41.252 [2024-05-15 04:42:55.286155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:41.252 [2024-05-15 04:42:55.286248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:05:41.252 [2024-05-15 04:42:55.286288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:41.252 [2024-05-15 04:42:55.286651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:05:41.252 passed 00:05:41.252 Test: test_nvme_tcp_qpair_submit_request ...[2024-05-15 04:42:55.286745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:41.252 passed 00:05:41.252 00:05:41.252 [2024-05-15 04:42:55.287143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:05:41.252 [2024-05-15 04:42:55.287189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:05:41.252 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.252 suites 1 1 n/a 0 0 00:05:41.252 tests 27 27 27 0 0 00:05:41.252 asserts 624 624 624 0 n/a 00:05:41.252 00:05:41.252 Elapsed time = 0.090 seconds 00:05:41.252 04:42:55 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:05:41.252 00:05:41.252 00:05:41.252 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.252 http://cunit.sourceforge.net/ 00:05:41.252 00:05:41.252 00:05:41.252 Suite: nvme_transport 00:05:41.252 Test: test_nvme_get_transport ...passed 00:05:41.252 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:05:41.252 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:05:41.252 Test: test_nvme_transport_poll_group_add_remove ...passed 00:05:41.252 Test: test_ctrlr_get_memory_domains ...passed 00:05:41.252 00:05:41.252 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.252 suites 1 1 n/a 0 0 00:05:41.252 tests 5 5 5 0 0 00:05:41.252 asserts 28 28 28 0 n/a 00:05:41.252 00:05:41.252 Elapsed time = 0.000 seconds 00:05:41.252 04:42:55 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:05:41.252 00:05:41.252 00:05:41.252 CUnit - A unit testing framework for 
C - Version 2.1-3 00:05:41.253 http://cunit.sourceforge.net/ 00:05:41.253 00:05:41.253 00:05:41.253 Suite: nvme_io_msg 00:05:41.253 Test: test_nvme_io_msg_send ...passed 00:05:41.253 Test: test_nvme_io_msg_process ...passed 00:05:41.253 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:05:41.253 00:05:41.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.253 suites 1 1 n/a 0 0 00:05:41.253 tests 3 3 3 0 0 00:05:41.253 asserts 56 56 56 0 n/a 00:05:41.253 00:05:41.253 Elapsed time = 0.000 seconds 00:05:41.253 04:42:55 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:05:41.253 00:05:41.253 00:05:41.253 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.253 http://cunit.sourceforge.net/ 00:05:41.253 00:05:41.253 00:05:41.253 Suite: nvme_pcie_common 00:05:41.253 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-05-15 04:42:55.370456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:05:41.253 passed 00:05:41.253 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:05:41.253 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:05:41.253 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-05-15 04:42:55.371549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:05:41.253 [2024-05-15 04:42:55.371956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:05:41.253 [2024-05-15 04:42:55.372003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:05:41.253 passed 00:05:41.253 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:05:41.253 Test: test_nvme_pcie_poll_group_get_stats ...[2024-05-15 04:42:55.372737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:41.253 [2024-05-15 04:42:55.372786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:41.253 passed 00:05:41.253 00:05:41.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.253 suites 1 1 n/a 0 0 00:05:41.253 tests 6 6 6 0 0 00:05:41.253 asserts 148 148 148 0 n/a 00:05:41.253 00:05:41.253 Elapsed time = 0.000 seconds 00:05:41.253 04:42:55 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:05:41.253 00:05:41.253 00:05:41.253 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.253 http://cunit.sourceforge.net/ 00:05:41.253 00:05:41.253 00:05:41.253 Suite: nvme_fabric 00:05:41.253 Test: test_nvme_fabric_prop_set_cmd ...passed 00:05:41.253 Test: test_nvme_fabric_prop_get_cmd ...passed 00:05:41.253 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:05:41.253 Test: test_nvme_fabric_discover_probe ...passed 00:05:41.253 Test: test_nvme_fabric_qpair_connect ...[2024-05-15 04:42:55.398419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:05:41.253 passed 00:05:41.253 00:05:41.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.253 suites 1 
1 n/a 0 0 00:05:41.253 tests 5 5 5 0 0 00:05:41.253 asserts 60 60 60 0 n/a 00:05:41.253 00:05:41.253 Elapsed time = 0.000 seconds 00:05:41.253 04:42:55 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:05:41.253 00:05:41.253 00:05:41.253 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.253 http://cunit.sourceforge.net/ 00:05:41.253 00:05:41.253 00:05:41.253 Suite: nvme_opal 00:05:41.253 Test: test_opal_nvme_security_recv_send_done ...passed 00:05:41.253 Test: test_opal_add_short_atom_header ...passed 00:05:41.253 00:05:41.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.253 suites 1 1 n/a 0 0 00:05:41.253 tests 2 2 2 0 0 00:05:41.253 asserts 22 22 22 0 n/a 00:05:41.253 00:05:41.253 Elapsed time = 0.000 seconds 00:05:41.253 [2024-05-15 04:42:55.427201] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:05:41.253 00:05:41.253 real 0m0.992s 00:05:41.253 user 0m0.382s 00:05:41.253 sys 0m0.470s 00:05:41.253 04:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.253 04:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.253 ************************************ 00:05:41.253 END TEST unittest_nvme 00:05:41.253 ************************************ 00:05:41.253 04:42:55 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:41.253 04:42:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.253 04:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.253 04:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.512 ************************************ 00:05:41.512 START TEST unittest_log 00:05:41.512 ************************************ 00:05:41.512 04:42:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:41.512 00:05:41.512 00:05:41.512 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.512 http://cunit.sourceforge.net/ 00:05:41.512 00:05:41.512 00:05:41.512 Suite: log 00:05:41.512 Test: log_test ...passed 00:05:41.512 Test: deprecation ...[2024-05-15 04:42:55.511025] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:05:41.512 [2024-05-15 04:42:55.511196] log_ut.c: 55:log_test: *DEBUG*: log test 00:05:41.512 log dump test: 00:05:41.512 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:05:41.512 spdk dump test: 00:05:41.512 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:05:41.512 spdk dump test: 00:05:41.512 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:05:41.512 00000010 65 20 63 68 61 72 73 e chars 00:05:42.449 passed 00:05:42.449 00:05:42.449 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.449 suites 1 1 n/a 0 0 00:05:42.449 tests 2 2 2 0 0 00:05:42.449 asserts 73 73 73 0 n/a 00:05:42.450 00:05:42.450 Elapsed time = 0.000 seconds 00:05:42.450 00:05:42.450 real 0m1.033s 00:05:42.450 user 0m0.016s 00:05:42.450 sys 0m0.018s 00:05:42.450 04:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.450 ************************************ 00:05:42.450 END TEST unittest_log 00:05:42.450 ************************************ 00:05:42.450 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.450 04:42:56 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:42.450 04:42:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:05:42.450 04:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.450 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.450 ************************************ 00:05:42.450 START TEST unittest_lvol 00:05:42.450 ************************************ 00:05:42.450 04:42:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:42.450 00:05:42.450 00:05:42.450 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.450 http://cunit.sourceforge.net/ 00:05:42.450 00:05:42.450 00:05:42.450 Suite: lvol 00:05:42.450 Test: lvs_init_unload_success ...[2024-05-15 04:42:56.600069] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:05:42.450 passed 00:05:42.450 Test: lvs_init_destroy_success ...[2024-05-15 04:42:56.600510] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:05:42.450 passed 00:05:42.450 Test: lvs_init_opts_success ...passed 00:05:42.450 Test: lvs_unload_lvs_is_null_fail ...passed 00:05:42.450 Test: lvs_names ...[2024-05-15 04:42:56.600724] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:05:42.450 [2024-05-15 04:42:56.600789] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:05:42.450 [2024-05-15 04:42:56.600819] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:05:42.450 passed 00:05:42.450 Test: lvol_create_destroy_success ...passed 00:05:42.450 Test: lvol_create_fail ...passed 00:05:42.450 Test: lvol_destroy_fail ...passed 00:05:42.450 Test: lvol_close ...passed 00:05:42.450 Test: lvol_resize ...passed 00:05:42.450 Test: lvol_set_read_only ...passed 00:05:42.450 Test: test_lvs_load ...passed 00:05:42.450 Test: lvols_load ...passed 00:05:42.450 Test: lvol_open ...passed 00:05:42.450 Test: lvol_snapshot ...passed 00:05:42.450 Test: lvol_snapshot_fail ...passed 00:05:42.450 Test: lvol_clone ...passed 00:05:42.450 Test: lvol_clone_fail ...passed 00:05:42.450 Test: lvol_iter_clones ...passed 00:05:42.450 Test: lvol_refcnt ...[2024-05-15 04:42:56.600943] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:05:42.450 [2024-05-15 04:42:56.601233] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:05:42.450 [2024-05-15 04:42:56.601361] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:05:42.450 [2024-05-15 04:42:56.601601] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:05:42.450 [2024-05-15 04:42:56.601775] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:05:42.450 [2024-05-15 04:42:56.601812] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:05:42.450 [2024-05-15 04:42:56.602287] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:05:42.450 [2024-05-15 04:42:56.602318] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:05:42.450 [2024-05-15 04:42:56.602456] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 
00:05:42.450 [2024-05-15 04:42:56.602532] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:42.450 [2024-05-15 04:42:56.603035] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:05:42.450 [2024-05-15 04:42:56.603460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:05:42.450 passed 00:05:42.450 Test: lvol_names ...[2024-05-15 04:42:56.603826] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 0f892d44-dd4d-4b9b-928a-a0eb8616c30f because it is still open 00:05:42.450 [2024-05-15 04:42:56.603985] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:05:42.450 passed 00:05:42.450 Test: lvol_create_thin_provisioned ...passed 00:05:42.450 Test: lvol_rename ...passed 00:05:42.450 Test: lvs_rename ...passed 00:05:42.450 Test: lvol_inflate ...passed 00:05:42.450 Test: lvol_decouple_parent ...passed 00:05:42.450 Test: lvol_get_xattr ...passed 00:05:42.450 Test: lvol_esnap_reload ...passed 00:05:42.450 Test: lvol_esnap_create_bad_args ...passed 00:05:42.450 Test: lvol_esnap_create_delete ...passed 00:05:42.450 Test: lvol_esnap_load_esnaps ...passed 00:05:42.450 Test: lvol_esnap_missing ...passed 00:05:42.450 Test: lvol_esnap_hotplug ... 00:05:42.450 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:05:42.450 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:05:42.450 [2024-05-15 04:42:56.604084] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:42.450 [2024-05-15 04:42:56.604236] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:05:42.450 [2024-05-15 04:42:56.604504] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:42.450 [2024-05-15 04:42:56.604583] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:05:42.450 [2024-05-15 04:42:56.604757] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:05:42.450 [2024-05-15 04:42:56.604912] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:42.450 [2024-05-15 04:42:56.605075] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:42.450 [2024-05-15 04:42:56.605381] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:05:42.450 [2024-05-15 04:42:56.605411] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:05:42.450 [2024-05-15 04:42:56.605458] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:05:42.450 [2024-05-15 04:42:56.605580] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:42.450 [2024-05-15 04:42:56.605709] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:05:42.450 [2024-05-15 04:42:56.606012] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:05:42.450 [2024-05-15 04:42:56.606173] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:42.450 [2024-05-15 04:42:56.606218] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:42.450 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:05:42.450 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:05:42.450 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:05:42.450 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:05:42.450 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:05:42.450 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:05:42.450 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:05:42.450 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:05:42.450 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:05:42.450 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:05:42.450 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:05:42.450 passed 00:05:42.450 Test: lvol_get_by ...passed 00:05:42.450 00:05:42.450 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.450 suites 1 1 n/a 0 0 00:05:42.450 tests 34 34 34 0 0 00:05:42.450 asserts 1439 1439 1439 0 n/a 00:05:42.450 00:05:42.450 Elapsed time = 0.010 seconds 00:05:42.450 [2024-05-15 04:42:56.606931] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol b9afd6ab-bd96-4397-aa47-89c39e22439a: failed to create esnap bs_dev: error -12 00:05:42.450 [2024-05-15 04:42:56.607209] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4ec18002-b259-4568-b5d2-9bc6a40e4f2a: failed to create esnap bs_dev: error -12 00:05:42.450 [2024-05-15 04:42:56.607347] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6966b58a-e684-42e1-81bb-2b1e16e366e8: failed to create esnap bs_dev: error -12 00:05:42.450 ************************************ 00:05:42.450 END TEST unittest_lvol 00:05:42.450 ************************************ 00:05:42.450 00:05:42.450 real 0m0.048s 00:05:42.450 user 0m0.025s 00:05:42.450 sys 0m0.023s 00:05:42.450 04:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.450 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.450 04:42:56 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 
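A note on the dump rows in the unittest_log output above ("00000000 6c 6f 67 20 64 75 6d 70 log dump" and the "spdk dump" rows): that is SPDK's hex-dump log format — an 8-digit hex offset, the raw bytes, then their printable-ASCII rendering. The following standalone C sketch reproduces the same layout with plain stdio; it is an illustration of the format only, not SPDK's implementation, and the sample string is the one from the log itself.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Print buf as rows of up to 16 bytes: hex offset, hex bytes, then the
 * printable-ASCII rendering, matching the unittest_log rows above. */
static void dump(FILE *fp, const void *buf, size_t len)
{
	const unsigned char *p = buf;
	size_t off, i, n;

	for (off = 0; off < len; off += 16) {
		n = (len - off < 16) ? (len - off) : 16;
		fprintf(fp, "%08zx ", off);
		for (i = 0; i < n; i++) {
			fprintf(fp, "%02x ", p[off + i]);
		}
		for (i = 0; i < n; i++) {
			fputc(isprint(p[off + i]) ? p[off + i] : '.', fp);
		}
		fputc('\n', fp);
	}
}

int main(void)
{
	const char msg[] = "spdk dump 16 more chars";

	/* Prints the same two rows seen in the log: a full 16-byte row
	 * ("spdk dump 16 mor") and a short 7-byte row ("e chars"). */
	dump(stdout, msg, strlen(msg));
	return 0;
}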
00:05:42.450 04:42:56 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:42.450 04:42:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.450 04:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.450 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 ************************************ 00:05:42.767 START TEST unittest_nvme_rdma 00:05:42.767 ************************************ 00:05:42.767 04:42:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:42.767 00:05:42.767 00:05:42.767 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.767 http://cunit.sourceforge.net/ 00:05:42.767 00:05:42.767 00:05:42.767 Suite: nvme_rdma 00:05:42.767 Test: test_nvme_rdma_build_sgl_request ...passed 00:05:42.767 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:05:42.767 Test: test_nvme_rdma_build_contig_request ...[2024-05-15 04:42:56.702420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:05:42.767 [2024-05-15 04:42:56.702699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:42.767 [2024-05-15 04:42:56.702816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:05:42.767 [2024-05-15 04:42:56.702903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:42.767 passed 00:05:42.767 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:05:42.767 Test: test_nvme_rdma_create_reqs ...[2024-05-15 04:42:56.703025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:05:42.767 passed 00:05:42.767 Test: test_nvme_rdma_create_rsps ...passed 00:05:42.767 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:05:42.767 Test: test_nvme_rdma_poller_create ...passed 00:05:42.767 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:05:42.767 Test: test_nvme_rdma_ctrlr_construct ...passed 00:05:42.767 Test: test_nvme_rdma_req_put_and_get ...passed 00:05:42.767 Test: test_nvme_rdma_req_init ...passed 00:05:42.767 Test: test_nvme_rdma_validate_cm_event ...[2024-05-15 04:42:56.703360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:05:42.767 [2024-05-15 04:42:56.703490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:05:42.767 [2024-05-15 04:42:56.703544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:05:42.767 [2024-05-15 04:42:56.703728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:05:42.767 [2024-05-15 04:42:56.704054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:05:42.767 passed 00:05:42.767 Test: test_nvme_rdma_qpair_init ...passed 00:05:42.767 Test: test_nvme_rdma_qpair_submit_request ...passed 00:05:42.767 Test: test_nvme_rdma_memory_domain ...passed 00:05:42.767 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:05:42.767 Test: test_rdma_get_memory_translation ...[2024-05-15 04:42:56.704097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:05:42.767 [2024-05-15 04:42:56.704193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:05:42.767 [2024-05-15 04:42:56.704256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:05:42.767 [2024-05-15 04:42:56.704327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:05:42.767 passed 00:05:42.767 Test: test_get_rdma_qpair_from_wc ...passed 00:05:42.767 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:05:42.767 Test: test_nvme_rdma_poll_group_get_stats ...[2024-05-15 04:42:56.704464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:42.767 [2024-05-15 04:42:56.704518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:42.767 passed 00:05:42.767 Test: test_nvme_rdma_qpair_set_poller ...passed 00:05:42.767 00:05:42.767 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.767 suites 1 1 n/a 0 0 00:05:42.767 tests 22 22 22 0 0 00:05:42.767 asserts 412 412 412 0 n/a 00:05:42.767 00:05:42.767 Elapsed time = 0.000 seconds 00:05:42.767 [2024-05-15 04:42:56.704648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:05:42.767 [2024-05-15 04:42:56.704686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:05:42.767 [2024-05-15 04:42:56.704734] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd02511d80 on poll group 0x60b0000001a0 00:05:42.767 [2024-05-15 04:42:56.704797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:05:42.767 [2024-05-15 04:42:56.704843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:05:42.767 [2024-05-15 04:42:56.704870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd02511d80 on poll group 0x60b0000001a0 00:05:42.767 [2024-05-15 04:42:56.704935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:05:42.767 00:05:42.767 real 0m0.039s 00:05:42.767 user 0m0.014s 00:05:42.767 sys 0m0.025s 00:05:42.767 04:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.767 ************************************ 00:05:42.767 END TEST unittest_nvme_rdma 00:05:42.767 ************************************ 00:05:42.767 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 04:42:56 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:42.767 04:42:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.767 04:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.767 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 ************************************ 00:05:42.767 START TEST unittest_nvmf_transport 00:05:42.767 ************************************ 00:05:42.767 04:42:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:42.767 00:05:42.767 00:05:42.767 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.767 http://cunit.sourceforge.net/ 00:05:42.767 00:05:42.767 00:05:42.767 Suite: nvmf 00:05:42.767 Test: test_spdk_nvmf_transport_create ...[2024-05-15 04:42:56.796521] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:05:42.767 passed 00:05:42.767 Test: test_nvmf_transport_poll_group_create ...passed 00:05:42.767 Test: test_spdk_nvmf_transport_opts_init ...passed 00:05:42.767 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:05:42.767 00:05:42.767 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.767 suites 1 1 n/a 0 0 00:05:42.767 tests 4 4 4 0 0 00:05:42.767 asserts 49 49 49 0 n/a 00:05:42.767 00:05:42.767 Elapsed time = 0.000 seconds 00:05:42.767 [2024-05-15 04:42:56.797002] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:05:42.767 [2024-05-15 04:42:56.797057] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:05:42.767 [2024-05-15 04:42:56.797186] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:05:42.767 [2024-05-15 04:42:56.797346] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:05:42.767 [2024-05-15 04:42:56.797461] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:05:42.767 [2024-05-15 04:42:56.797489] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:05:42.767 00:05:42.767 real 0m0.040s 00:05:42.767 user 0m0.018s 00:05:42.767 sys 0m0.022s 00:05:42.767 04:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.767 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 ************************************ 00:05:42.767 END TEST unittest_nvmf_transport 00:05:42.767 ************************************ 00:05:42.768 04:42:56 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:42.768 04:42:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.768 04:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.768 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 ************************************ 00:05:42.768 START TEST unittest_rdma 00:05:42.768 ************************************ 00:05:42.768 04:42:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:42.768 00:05:42.768 00:05:42.768 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.768 http://cunit.sourceforge.net/ 00:05:42.768 00:05:42.768 00:05:42.768 Suite: rdma_common 00:05:42.768 Test: test_spdk_rdma_pd ...passed 00:05:42.768 00:05:42.768 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.768 suites 1 1 n/a 0 0 00:05:42.768 tests 1 1 1 0 0 00:05:42.768 asserts 31 31 31 0 n/a 00:05:42.768 00:05:42.768 Elapsed time = 0.000 seconds 00:05:42.768 [2024-05-15 04:42:56.886239] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:05:42.768 [2024-05-15 04:42:56.886550] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:05:42.768 ************************************ 00:05:42.768 END TEST unittest_rdma 00:05:42.768 ************************************ 00:05:42.768 00:05:42.768 real 0m0.033s 00:05:42.768 user 0m0.015s 00:05:42.768 sys 0m0.018s 00:05:42.768 04:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.768 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 04:42:56 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:42.768 04:42:56 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:05:42.768 04:42:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.768 04:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.768 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 ************************************ 00:05:42.768 START TEST unittest_nvme_cuse 00:05:42.768 ************************************ 00:05:42.768 04:42:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:05:42.768 00:05:42.768 00:05:42.768 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.768 http://cunit.sourceforge.net/ 00:05:42.768 00:05:42.768 00:05:42.768 Suite: nvme_cuse 00:05:42.768 Test: test_cuse_nvme_submit_io_read_write ...passed 00:05:42.768 Test: 
test_cuse_nvme_submit_io_read_write_with_md ...passed 00:05:42.768 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:05:42.768 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:05:42.768 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:05:42.768 Test: test_cuse_nvme_submit_io ...passed 00:05:42.768 Test: test_cuse_nvme_reset ...[2024-05-15 04:42:56.981564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:05:42.768 [2024-05-15 04:42:56.981830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:05:42.768 passed 00:05:42.768 Test: test_nvme_cuse_stop ...passed 00:05:42.768 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:05:42.768 00:05:42.768 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.768 suites 1 1 n/a 0 0 00:05:42.768 tests 9 9 9 0 0 00:05:42.768 asserts 121 121 121 0 n/a 00:05:42.768 00:05:42.768 Elapsed time = 0.000 seconds 00:05:42.768 00:05:42.768 real 0m0.037s 00:05:42.768 user 0m0.017s 00:05:42.768 sys 0m0.020s 00:05:42.768 04:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.768 ************************************ 00:05:42.768 END TEST unittest_nvme_cuse 00:05:42.768 ************************************ 00:05:42.768 04:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.027 04:42:57 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:05:43.027 04:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.027 04:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.027 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.027 ************************************ 00:05:43.027 START TEST unittest_nvmf 00:05:43.027 ************************************ 00:05:43.027 04:42:57 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:05:43.027 04:42:57 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:05:43.027 00:05:43.028 00:05:43.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.028 http://cunit.sourceforge.net/ 00:05:43.028 00:05:43.028 00:05:43.028 Suite: nvmf 00:05:43.028 Test: test_get_log_page ...[2024-05-15 04:42:57.067888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:05:43.028 passed 00:05:43.028 Test: test_process_fabrics_cmd ...passed 00:05:43.028 Test: test_connect ...[2024-05-15 04:42:57.068607] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:05:43.028 [2024-05-15 04:42:57.068754] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:05:43.028 [2024-05-15 04:42:57.068815] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:05:43.028 [2024-05-15 04:42:57.068846] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:05:43.028 [2024-05-15 04:42:57.068947] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:05:43.028 [2024-05-15 04:42:57.068996] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:05:43.028 [2024-05-15 
04:42:57.069118] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:05:43.028 [2024-05-15 04:42:57.069158] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:05:43.028 [2024-05-15 04:42:57.069228] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:05:43.028 [2024-05-15 04:42:57.069271] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:05:43.028 [2024-05-15 04:42:57.069409] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:05:43.028 [2024-05-15 04:42:57.069447] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:05:43.028 passed 00:05:43.028 Test: test_get_ns_id_desc_list ...passed 00:05:43.028 Test: test_identify_ns ...[2024-05-15 04:42:57.069518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:05:43.028 [2024-05-15 04:42:57.069565] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:05:43.028 [2024-05-15 04:42:57.069630] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:05:43.028 [2024-05-15 04:42:57.069708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:05:43.028 [2024-05-15 04:42:57.069932] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:43.028 passed 00:05:43.028 Test: test_identify_ns_iocs_specific ...[2024-05-15 04:42:57.070073] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:05:43.028 [2024-05-15 04:42:57.070177] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:05:43.028 [2024-05-15 04:42:57.070279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:43.028 passed 00:05:43.028 Test: test_reservation_write_exclusive ...[2024-05-15 04:42:57.070455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:43.028 passed 00:05:43.028 Test: test_reservation_exclusive_access ...passed 00:05:43.028 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:05:43.028 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:05:43.028 Test: test_reservation_notification_log_page ...passed 00:05:43.028 Test: test_get_dif_ctx ...passed 00:05:43.028 Test: test_set_get_features ...passed 00:05:43.028 Test: test_identify_ctrlr ...passed 00:05:43.028 Test: test_identify_ctrlr_iocs_specific ...[2024-05-15 04:42:57.071208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:43.028 [2024-05-15 04:42:57.071244] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:43.028 [2024-05-15 
04:42:57.071284] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:05:43.028 [2024-05-15 04:42:57.071341] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:05:43.028 passed 00:05:43.028 Test: test_custom_admin_cmd ...passed 00:05:43.028 Test: test_fused_compare_and_write ...[2024-05-15 04:42:57.071684] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:05:43.028 passed 00:05:43.028 Test: test_multi_async_event_reqs ...passed 00:05:43.028 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:05:43.028 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:05:43.028 Test: test_multi_async_events ...passed 00:05:43.028 Test: test_rae ...passed 00:05:43.028 Test: test_nvmf_ctrlr_create_destruct ...passed 00:05:43.028 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:05:43.028 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:05:43.028 Test: test_zcopy_read ...passed 00:05:43.028 Test: test_zcopy_write ...passed 00:05:43.028 Test: test_nvmf_property_set ...passed 00:05:43.028 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:05:43.028 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:05:43.028 00:05:43.028 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.028 suites 1 1 n/a 0 0 00:05:43.028 tests 30 30 30 0 0 00:05:43.028 asserts 885 885 885 0 n/a 00:05:43.028 00:05:43.028 Elapsed time = 0.000 seconds 00:05:43.028 [2024-05-15 04:42:57.071902] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:43.028 [2024-05-15 04:42:57.071950] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:43.028 [2024-05-15 04:42:57.072297] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:05:43.028 [2024-05-15 04:42:57.072472] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:43.028 [2024-05-15 04:42:57.072531] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:43.028 [2024-05-15 04:42:57.072576] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:05:43.028 [2024-05-15 04:42:57.072623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:05:43.028 [2024-05-15 04:42:57.072659] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:05:43.028 04:42:57 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:05:43.028 00:05:43.028 00:05:43.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.028 http://cunit.sourceforge.net/ 00:05:43.028 00:05:43.028 00:05:43.028 Suite: nvmf 00:05:43.028 Test: test_get_rw_params ...passed 00:05:43.028 Test: test_lba_in_range ...passed 00:05:43.028 Test: test_get_dif_ctx ...passed 00:05:43.028 Test: 
test_nvmf_bdev_ctrlr_identify_ns ...passed 00:05:43.028 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-05-15 04:42:57.101668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:05:43.028 [2024-05-15 04:42:57.101926] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:05:43.028 [2024-05-15 04:42:57.102032] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:05:43.028 passed 00:05:43.028 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-05-15 04:42:57.102115] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:05:43.028 [2024-05-15 04:42:57.102209] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:05:43.028 passed 00:05:43.028 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-05-15 04:42:57.102319] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:05:43.028 [2024-05-15 04:42:57.102360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:05:43.028 [2024-05-15 04:42:57.102416] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:05:43.028 [2024-05-15 04:42:57.102455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:05:43.028 passed 00:05:43.028 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:05:43.028 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:05:43.028 00:05:43.028 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.028 suites 1 1 n/a 0 0 00:05:43.028 tests 9 9 9 0 0 00:05:43.028 asserts 157 157 157 0 n/a 00:05:43.028 00:05:43.028 Elapsed time = 0.000 seconds 00:05:43.028 04:42:57 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:05:43.028 00:05:43.028 00:05:43.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.028 http://cunit.sourceforge.net/ 00:05:43.028 00:05:43.028 00:05:43.028 Suite: nvmf 00:05:43.028 Test: test_discovery_log ...passed 00:05:43.028 Test: test_discovery_log_with_filters ...passed 00:05:43.028 00:05:43.028 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.028 suites 1 1 n/a 0 0 00:05:43.028 tests 2 2 2 0 0 00:05:43.028 asserts 238 238 238 0 n/a 00:05:43.028 00:05:43.028 Elapsed time = 0.000 seconds 00:05:43.028 04:42:57 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:05:43.028 00:05:43.028 00:05:43.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.028 http://cunit.sourceforge.net/ 00:05:43.028 00:05:43.028 00:05:43.028 Suite: nvmf 00:05:43.028 Test: nvmf_test_create_subsystem ...[2024-05-15 04:42:57.170517] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 
00:05:43.028 [2024-05-15 04:42:57.170773] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:05:43.029 [2024-05-15 04:42:57.170861] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:05:43.029 [2024-05-15 04:42:57.170901] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:05:43.029 [2024-05-15 04:42:57.170935] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:05:43.029 [2024-05-15 04:42:57.170972] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:05:43.029 [2024-05-15 04:42:57.171039] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:05:43.029 [2024-05-15 04:42:57.171205] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:05:43.029 [2024-05-15 04:42:57.171286] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:05:43.029 [2024-05-15 04:42:57.171328] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:43.029 [2024-05-15 04:42:57.171363] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:43.029 passed 00:05:43.029 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:05:43.029 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:05:43.029 Test: test_reservation_register ...[2024-05-15 04:42:57.171627] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:05:43.029 [2024-05-15 04:42:57.171769] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:05:43.029 [2024-05-15 04:42:57.172076] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 passed 00:05:43.029 Test: test_reservation_register_with_ptpl ...[2024-05-15 04:42:57.172172] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:05:43.029 passed 00:05:43.029 Test: test_reservation_acquire_preempt_1 ...passed 00:05:43.029 Test: test_reservation_acquire_release_with_ptpl ...[2024-05-15 04:42:57.173094] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 passed 00:05:43.029 Test: test_reservation_release ...passed 00:05:43.029 Test: test_reservation_unregister_notification ...[2024-05-15 04:42:57.174665] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 [2024-05-15 04:42:57.174852] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 passed 00:05:43.029 Test: test_reservation_release_notification ...[2024-05-15 04:42:57.175071] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 passed 00:05:43.029 Test: test_reservation_release_notification_write_exclusive ...passed 00:05:43.029 Test: test_reservation_clear_notification ...[2024-05-15 04:42:57.175296] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 [2024-05-15 04:42:57.175463] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 passed 00:05:43.029 Test: test_reservation_preempt_notification ...passed 00:05:43.029 Test: test_spdk_nvmf_ns_event ...[2024-05-15 04:42:57.175652] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:43.029 passed 00:05:43.029 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:05:43.029 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:05:43.029 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:05:43.029 Test: test_nvmf_ns_reservation_report ...[2024-05-15 04:42:57.176168] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:05:43.029 [2024-05-15 04:42:57.176254] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:05:43.029 passed 00:05:43.029 Test: test_nvmf_nqn_is_valid ...[2024-05-15 04:42:57.176373] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3146:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:05:43.029 [2024-05-15 04:42:57.176457] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:05:43.029 [2024-05-15 04:42:57.176503] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:dd0c5778-39c3-48a4-b1a7-06657ee2e15": uuid is not the correct length 00:05:43.029 [2024-05-15 04:42:57.176542] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:05:43.029 passed 00:05:43.029 Test: test_nvmf_ns_reservation_restore ...passed 00:05:43.029 Test: test_nvmf_subsystem_state_change ...[2024-05-15 04:42:57.176723] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:05:43.029 passed 00:05:43.029 Test: test_nvmf_reservation_custom_ops ...passed 00:05:43.029 00:05:43.029 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.029 suites 1 1 n/a 0 0 00:05:43.029 tests 22 22 22 0 0 00:05:43.029 asserts 405 405 405 0 n/a 00:05:43.029 00:05:43.029 Elapsed time = 0.010 seconds 00:05:43.029 04:42:57 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:05:43.029 00:05:43.029 00:05:43.029 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.029 http://cunit.sourceforge.net/ 00:05:43.029 00:05:43.029 00:05:43.029 Suite: nvmf 00:05:43.029 Test: test_nvmf_tcp_create ...[2024-05-15 04:42:57.234436] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:05:43.029 passed 00:05:43.289 Test: test_nvmf_tcp_destroy ...passed 00:05:43.289 Test: test_nvmf_tcp_poll_group_create ...passed 00:05:43.289 Test: test_nvmf_tcp_send_c2h_data ...passed 00:05:43.289 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:05:43.289 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:05:43.289 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:05:43.289 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:05:43.289 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:05:43.289 Test: test_nvmf_tcp_icreq_handle ...[2024-05-15 04:42:57.382927] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.383043] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 
00:05:43.289 [2024-05-15 04:42:57.383145] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.383187] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.383223] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.383312] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:43.289 [2024-05-15 04:42:57.383423] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.383483] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.383520] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:43.289 [2024-05-15 04:42:57.383556] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.383584] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.383624] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.383653] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.383700] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 passed 00:05:43.289 Test: test_nvmf_tcp_check_xfer_type ...passed 00:05:43.289 Test: test_nvmf_tcp_invalid_sgl ...passed 00:05:43.289 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-15 04:42:57.385471] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:05:43.289 [2024-05-15 04:42:57.385584] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.385639] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04eec0 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.385782] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe5e04fc20 00:05:43.289 [2024-05-15 04:42:57.385951] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.386048] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.386119] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe5e04f380 00:05:43.289 [2024-05-15 04:42:57.386172] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.386245] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.386294] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:05:43.289 [2024-05-15 04:42:57.386356] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.386432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.386512] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:05:43.289 [2024-05-15 04:42:57.386572] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.386632] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.386687] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.386766] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.386864] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.386911] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 passed 00:05:43.289 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-05-15 04:42:57.386990] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.387044] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.387122] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.387170] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 04:42:57.387258] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.387313] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 [2024-05-15 
04:42:57.387378] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:43.289 [2024-05-15 04:42:57.387425] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe5e04f380 is same with the state(5) to be set 00:05:43.289 passed 00:05:43.289 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:05:43.289 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:05:43.289 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:05:43.289 00:05:43.289 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.289 suites 1 1 n/a 0 0 00:05:43.289 tests 17 17 17 0 0 00:05:43.289 asserts 222 222 222 0 n/a 00:05:43.289 00:05:43.289 Elapsed time = 0.210 seconds 00:05:43.289 [2024-05-15 04:42:57.416785] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:05:43.289 [2024-05-15 04:42:57.416888] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:05:43.289 [2024-05-15 04:42:57.417160] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:05:43.289 [2024-05-15 04:42:57.417195] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:05:43.289 [2024-05-15 04:42:57.417348] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:05:43.289 [2024-05-15 04:42:57.417379] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
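Each *_ut binary invoked in this run (log_ut, lvol_ut, tcp_ut, and so on) is a self-contained CUnit 2.1-3 program: the "Suite:" / "Test: ... passed" / "Run Summary" blocks are CUnit's verbose basic-mode report, and the interleaved *ERROR* lines — sometimes appearing after the summary, as in the tcp block just above — come from the library code being deliberately driven down failure paths, not from failing assertions. A minimal harness of the same shape, as a sketch; the suite and test names here are illustrative placeholders, not SPDK's.

#include <CUnit/Basic.h>

/* CU_ASSERT records the result in the registry and continues; a test is
 * reported "passed" when all of its asserts hold, even if the code under
 * test logged *ERROR* lines along the way. */
static void example_case(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* No per-suite setup/teardown functions, hence the two NULLs. */
	suite = CU_add_suite("example", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "example_case", example_case) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* Verbose basic mode prints the Suite/Test/Run Summary report
	 * seen throughout this log. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return (int)num_failures;
}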
00:05:43.289 04:42:57 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:05:43.290 00:05:43.290 00:05:43.290 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.290 http://cunit.sourceforge.net/ 00:05:43.290 00:05:43.290 00:05:43.290 Suite: nvmf 00:05:43.549 Test: test_nvmf_tgt_create_poll_group ...passed 00:05:43.549 00:05:43.549 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.549 suites 1 1 n/a 0 0 00:05:43.549 tests 1 1 1 0 0 00:05:43.549 asserts 17 17 17 0 n/a 00:05:43.549 00:05:43.549 Elapsed time = 0.030 seconds 00:05:43.549 ************************************ 00:05:43.549 END TEST unittest_nvmf 00:05:43.549 ************************************ 00:05:43.549 00:05:43.549 real 0m0.568s 00:05:43.549 user 0m0.229s 00:05:43.549 sys 0m0.339s 00:05:43.549 04:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.549 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 04:42:57 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:43.549 04:42:57 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:43.549 04:42:57 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:43.549 04:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.549 04:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.549 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 ************************************ 00:05:43.549 START TEST unittest_nvmf_rdma 00:05:43.549 ************************************ 00:05:43.549 04:42:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:43.549 00:05:43.549 00:05:43.549 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.549 http://cunit.sourceforge.net/ 00:05:43.549 00:05:43.549 00:05:43.549 Suite: nvmf 00:05:43.549 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-15 04:42:57.693619] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:05:43.549 passed 00:05:43.549 Test: test_spdk_nvmf_rdma_request_process ...[2024-05-15 04:42:57.693967] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:05:43.549 [2024-05-15 04:42:57.694015] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:05:43.549 passed 00:05:43.549 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:05:43.549 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:05:43.549 Test: test_nvmf_rdma_opts_init ...passed 00:05:43.549 Test: test_nvmf_rdma_request_free_data ...passed 00:05:43.549 Test: test_nvmf_rdma_update_ibv_state ...passed 00:05:43.549 Test: test_nvmf_rdma_resources_create ...passed 00:05:43.549 Test: test_nvmf_rdma_qpair_compare ...passed 00:05:43.549 Test: test_nvmf_rdma_resize_cq ...passed 00:05:43.549 00:05:43.549 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.549 suites 1 1 n/a 0 0 00:05:43.549 tests 10 10 10 0 0 00:05:43.549 asserts 584 584 584 0 n/a 00:05:43.549 00:05:43.549 Elapsed time = 0.000 seconds 00:05:43.549 [2024-05-15 04:42:57.694750] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:05:43.549 [2024-05-15 04:42:57.694791] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:05:43.549 [2024-05-15 04:42:57.696230] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:05:43.549 Using CQ of insufficient size may lead to CQ overrun 00:05:43.549 [2024-05-15 04:42:57.696354] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:05:43.549 [2024-05-15 04:42:57.696403] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:05:43.549 ************************************ 00:05:43.549 END TEST unittest_nvmf_rdma 00:05:43.549 ************************************ 00:05:43.549 00:05:43.549 real 0m0.038s 00:05:43.549 user 0m0.019s 00:05:43.549 sys 0m0.020s 00:05:43.549 04:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.549 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 04:42:57 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:43.549 04:42:57 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:05:43.549 04:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.549 04:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.549 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 ************************************ 00:05:43.549 START TEST unittest_scsi 00:05:43.549 ************************************ 00:05:43.549 04:42:57 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:05:43.549 04:42:57 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:05:43.809 00:05:43.809 00:05:43.809 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.809 http://cunit.sourceforge.net/ 00:05:43.809 00:05:43.809 00:05:43.809 Suite: dev_suite 00:05:43.809 Test: dev_destruct_null_dev ...passed 00:05:43.809 Test: dev_destruct_zero_luns ...passed 00:05:43.809 Test: dev_destruct_null_lun ...passed 00:05:43.809 Test: dev_destruct_success ...passed 00:05:43.809 Test: dev_construct_num_luns_zero ...passed 00:05:43.809 Test: dev_construct_no_lun_zero ...passed 00:05:43.809 Test: dev_construct_null_lun ...passed 00:05:43.809 Test: dev_construct_name_too_long ...passed 00:05:43.809 Test: dev_construct_success ...passed 00:05:43.809 Test: dev_construct_success_lun_zero_not_first ...passed 00:05:43.809 Test: dev_queue_mgmt_task_success ...passed 00:05:43.809 Test: dev_queue_task_success ...passed 00:05:43.809 Test: dev_stop_success ...passed 00:05:43.809 Test: dev_add_port_max_ports ...passed 00:05:43.809 Test: dev_add_port_construct_failure1 ...passed 00:05:43.809 Test: dev_add_port_construct_failure2 ...passed 00:05:43.809 Test: dev_add_port_success1 ...passed 00:05:43.809 Test: dev_add_port_success2 ...passed 00:05:43.809 Test: dev_add_port_success3 ...passed 00:05:43.809 Test: dev_find_port_by_id_num_ports_zero ...passed 00:05:43.809 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:05:43.809 Test: dev_find_port_by_id_success ...passed 00:05:43.809 Test: dev_add_lun_bdev_not_found ...passed 
00:05:43.809 Test: dev_add_lun_no_free_lun_id ...passed 00:05:43.809 Test: dev_add_lun_success1 ...passed 00:05:43.809 Test: dev_add_lun_success2 ...passed 00:05:43.809 Test: dev_check_pending_tasks ...passed 00:05:43.809 Test: dev_iterate_luns ...passed 00:05:43.809 Test: dev_find_free_lun ...[2024-05-15 04:42:57.788756] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:05:43.809 [2024-05-15 04:42:57.789063] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:05:43.809 [2024-05-15 04:42:57.789101] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:05:43.810 [2024-05-15 04:42:57.789150] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:05:43.810 [2024-05-15 04:42:57.789400] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:05:43.810 [2024-05-15 04:42:57.789513] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:05:43.810 [2024-05-15 04:42:57.789620] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:05:43.810 [2024-05-15 04:42:57.790128] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:05:43.810 passed 00:05:43.810 00:05:43.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.810 suites 1 1 n/a 0 0 00:05:43.810 tests 29 29 29 0 0 00:05:43.810 asserts 97 97 97 0 n/a 00:05:43.810 00:05:43.810 Elapsed time = 0.000 seconds 00:05:43.810 04:42:57 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:05:43.810 00:05:43.810 00:05:43.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.810 http://cunit.sourceforge.net/ 00:05:43.810 00:05:43.810 00:05:43.810 Suite: lun_suite 00:05:43.810 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:05:43.810 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:05:43.810 Test: lun_task_mgmt_execute_lun_reset ...[2024-05-15 04:42:57.827300] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:05:43.810 [2024-05-15 04:42:57.827609] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:05:43.810 passed 00:05:43.810 Test: lun_task_mgmt_execute_target_reset ...passed 00:05:43.810 Test: lun_task_mgmt_execute_invalid_case ...passed 00:05:43.810 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:05:43.810 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:05:43.810 Test: lun_append_task_null_lun_not_supported ...passed 00:05:43.810 Test: lun_execute_scsi_task_pending ...passed 00:05:43.810 Test: lun_execute_scsi_task_complete ...passed 00:05:43.810 Test: lun_execute_scsi_task_resize ...passed 00:05:43.810 Test: lun_destruct_success ...passed 00:05:43.810 Test: lun_construct_null_ctx ...passed 00:05:43.810 Test: lun_construct_success ...passed 
00:05:43.810 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:05:43.810 Test: lun_reset_task_suspend_scsi_task ...passed 00:05:43.810 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:05:43.810 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:05:43.810 00:05:43.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.810 suites 1 1 n/a 0 0 00:05:43.810 tests 18 18 18 0 0 00:05:43.810 asserts 153 153 153 0 n/a 00:05:43.810 00:05:43.810 Elapsed time = 0.000 seconds 00:05:43.810 [2024-05-15 04:42:57.827990] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:05:43.810 [2024-05-15 04:42:57.828164] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:05:43.810 04:42:57 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:05:43.810 00:05:43.810 00:05:43.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.810 http://cunit.sourceforge.net/ 00:05:43.810 00:05:43.810 00:05:43.810 Suite: scsi_suite 00:05:43.810 Test: scsi_init ...passed 00:05:43.810 00:05:43.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.810 suites 1 1 n/a 0 0 00:05:43.810 tests 1 1 1 0 0 00:05:43.810 asserts 1 1 1 0 n/a 00:05:43.810 00:05:43.810 Elapsed time = 0.000 seconds 00:05:43.810 04:42:57 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:05:43.810 00:05:43.810 00:05:43.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.810 http://cunit.sourceforge.net/ 00:05:43.810 00:05:43.810 00:05:43.810 Suite: translation_suite 00:05:43.810 Test: mode_select_6_test ...passed 00:05:43.810 Test: mode_select_6_test2 ...passed 00:05:43.810 Test: mode_sense_6_test ...passed 00:05:43.810 Test: mode_sense_10_test ...passed 00:05:43.810 Test: inquiry_evpd_test ...passed 00:05:43.810 Test: inquiry_standard_test ...passed 00:05:43.810 Test: inquiry_overflow_test ...passed 00:05:43.810 Test: task_complete_test ...passed 00:05:43.810 Test: lba_range_test ...passed 00:05:43.810 Test: xfer_len_test ...passed 00:05:43.810 Test: xfer_test ...[2024-05-15 04:42:57.896513] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:05:43.810 passed 00:05:43.810 Test: scsi_name_padding_test ...passed 00:05:43.810 Test: get_dif_ctx_test ...passed 00:05:43.810 Test: unmap_split_test ...passed 00:05:43.810 00:05:43.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.810 suites 1 1 n/a 0 0 00:05:43.810 tests 14 14 14 0 0 00:05:43.810 asserts 1200 1200 1200 0 n/a 00:05:43.810 00:05:43.810 Elapsed time = 0.000 seconds 00:05:43.810 04:42:57 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:05:43.810 00:05:43.810 00:05:43.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.810 http://cunit.sourceforge.net/ 00:05:43.810 00:05:43.810 00:05:43.810 Suite: reservation_suite 00:05:43.810 Test: test_reservation_register ...passed 00:05:43.810 Test: test_reservation_reserve ...[2024-05-15 04:42:57.928416] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:43.810 [2024-05-15 04:42:57.928794] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's 
key 0xa 00:05:43.810 [2024-05-15 04:42:57.928853] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:05:43.810 [2024-05-15 04:42:57.928962] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:05:43.810 passed 00:05:43.810 Test: test_reservation_preempt_non_all_regs ...passed 00:05:43.810 Test: test_reservation_preempt_all_regs ...passed 00:05:43.810 Test: test_reservation_cmds_conflict ...[2024-05-15 04:42:57.929026] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:43.810 [2024-05-15 04:42:57.929086] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:05:43.810 [2024-05-15 04:42:57.929211] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:43.810 [2024-05-15 04:42:57.929307] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:43.810 [2024-05-15 04:42:57.929364] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:05:43.810 [2024-05-15 04:42:57.929407] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:43.810 [2024-05-15 04:42:57.929433] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:43.810 passed 00:05:43.810 Test: test_scsi2_reserve_release ...passed 00:05:43.810 Test: test_pr_with_scsi2_reserve_release ...passed 00:05:43.810 00:05:43.810 [2024-05-15 04:42:57.929468] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:43.810 [2024-05-15 04:42:57.929495] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:43.810 [2024-05-15 04:42:57.929589] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:43.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.810 suites 1 1 n/a 0 0 00:05:43.810 tests 7 7 7 0 0 00:05:43.810 asserts 257 257 257 0 n/a 00:05:43.810 00:05:43.810 Elapsed time = 0.000 seconds 00:05:43.810 00:05:43.810 real 0m0.170s 00:05:43.810 user 0m0.088s 00:05:43.810 sys 0m0.085s 00:05:43.810 04:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.810 ************************************ 00:05:43.810 END TEST unittest_scsi 00:05:43.810 ************************************ 00:05:43.810 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.810 04:42:57 -- unit/unittest.sh@276 -- # uname -s 00:05:43.810 04:42:57 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:05:43.810 04:42:57 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:05:43.810 04:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.810 04:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.810 04:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.810 ************************************ 00:05:43.810 START TEST unittest_sock 
00:05:43.810 ************************************ 00:05:43.810 04:42:57 -- common/autotest_common.sh@1104 -- # unittest_sock 00:05:43.810 04:42:57 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:05:43.810 00:05:43.810 00:05:43.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.810 http://cunit.sourceforge.net/ 00:05:43.810 00:05:43.810 00:05:43.810 Suite: sock 00:05:43.810 Test: posix_sock ...passed 00:05:43.810 Test: ut_sock ...passed 00:05:43.810 Test: posix_sock_group ...passed 00:05:43.810 Test: ut_sock_group ...passed 00:05:44.070 Test: posix_sock_group_fairness ...passed 00:05:44.070 Test: _posix_sock_close ...passed 00:05:44.070 Test: sock_get_default_opts ...passed 00:05:44.070 Test: ut_sock_impl_get_set_opts ...passed 00:05:44.070 Test: posix_sock_impl_get_set_opts ...passed 00:05:44.070 Test: ut_sock_map ...passed 00:05:44.070 Test: override_impl_opts ...passed 00:05:44.070 Test: ut_sock_group_get_ctx ...passed 00:05:44.070 00:05:44.070 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.070 suites 1 1 n/a 0 0 00:05:44.070 tests 12 12 12 0 0 00:05:44.070 asserts 349 349 349 0 n/a 00:05:44.070 00:05:44.070 Elapsed time = 0.000 seconds 00:05:44.070 04:42:58 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:05:44.070 00:05:44.070 00:05:44.070 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.070 http://cunit.sourceforge.net/ 00:05:44.070 00:05:44.070 00:05:44.070 Suite: posix 00:05:44.070 Test: flush ...passed 00:05:44.070 00:05:44.070 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.070 suites 1 1 n/a 0 0 00:05:44.070 tests 1 1 1 0 0 00:05:44.070 asserts 28 28 28 0 n/a 00:05:44.070 00:05:44.070 Elapsed time = 0.000 seconds 00:05:44.070 04:42:58 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:44.070 00:05:44.070 real 0m0.104s 00:05:44.070 user 0m0.041s 00:05:44.070 sys 0m0.039s 00:05:44.070 04:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.070 ************************************ 00:05:44.070 END TEST unittest_sock 00:05:44.070 ************************************ 00:05:44.070 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.070 04:42:58 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:44.070 04:42:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.070 04:42:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.070 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.070 ************************************ 00:05:44.070 START TEST unittest_thread 00:05:44.070 ************************************ 00:05:44.070 04:42:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:44.070 00:05:44.070 00:05:44.070 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.070 http://cunit.sourceforge.net/ 00:05:44.070 00:05:44.070 00:05:44.070 Suite: io_channel 00:05:44.070 Test: thread_alloc ...passed 00:05:44.070 Test: thread_send_msg ...passed 00:05:44.070 Test: thread_poller ...passed 00:05:44.070 Test: poller_pause ...passed 00:05:44.070 Test: thread_for_each ...passed 00:05:44.070 Test: for_each_channel_remove ...passed 00:05:44.070 Test: for_each_channel_unreg ...passed 00:05:44.070 Test: thread_name ...[2024-05-15 04:42:58.190049] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffe1f88fdf0 already registered (old:0x613000000200 new:0x6130000003c0) 00:05:44.070 passed 00:05:44.070 Test: channel ...passed 00:05:44.070 Test: channel_destroy_races ...[2024-05-15 04:42:58.192919] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x48e820 00:05:44.070 passed 00:05:44.070 Test: thread_exit_test ...[2024-05-15 04:42:58.196553] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:05:44.070 passed 00:05:44.070 Test: thread_update_stats_test ...passed 00:05:44.070 Test: nested_channel ...passed 00:05:44.070 Test: device_unregister_and_thread_exit_race ...passed 00:05:44.070 Test: cache_closest_timed_poller ...passed 00:05:44.070 Test: multi_timed_pollers_have_same_expiration ...passed 00:05:44.070 Test: io_device_lookup ...passed 00:05:44.070 Test: spdk_spin ...[2024-05-15 04:42:58.203670] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:44.070 [2024-05-15 04:42:58.203733] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe1f88fdd0 00:05:44.070 [2024-05-15 04:42:58.203831] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:44.070 passed 00:05:44.070 Test: for_each_channel_and_thread_exit_race ...[2024-05-15 04:42:58.205101] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:44.070 [2024-05-15 04:42:58.205173] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe1f88fdd0 00:05:44.070 [2024-05-15 04:42:58.205208] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:44.070 [2024-05-15 04:42:58.205243] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe1f88fdd0 00:05:44.070 [2024-05-15 04:42:58.205279] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:44.070 [2024-05-15 04:42:58.205322] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe1f88fdd0 00:05:44.070 [2024-05-15 04:42:58.205350] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:05:44.070 [2024-05-15 04:42:58.205400] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe1f88fdd0 00:05:44.070 passed 00:05:44.070 Test: for_each_thread_and_thread_exit_race ...passed 00:05:44.070 00:05:44.070 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.070 suites 1 1 n/a 0 0 00:05:44.070 tests 20 20 20 0 0 00:05:44.070 asserts 409 409 409 0 n/a 00:05:44.070 00:05:44.070 Elapsed time = 0.040 seconds 00:05:44.070 00:05:44.070 real 0m0.075s 00:05:44.070 user 0m0.049s 00:05:44.070 sys 0m0.026s 00:05:44.070 04:42:58 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:05:44.070 ************************************ 00:05:44.070 END TEST unittest_thread 00:05:44.070 ************************************ 00:05:44.070 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.071 04:42:58 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:44.071 04:42:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.071 04:42:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.071 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.071 ************************************ 00:05:44.071 START TEST unittest_iobuf 00:05:44.071 ************************************ 00:05:44.071 04:42:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:44.330 00:05:44.330 00:05:44.330 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.330 http://cunit.sourceforge.net/ 00:05:44.330 00:05:44.330 00:05:44.330 Suite: io_channel 00:05:44.330 Test: iobuf ...passed 00:05:44.330 Test: iobuf_cache ...passed 00:05:44.330 00:05:44.330 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.330 suites 1 1 n/a 0 0 00:05:44.330 tests 2 2 2 0 0 00:05:44.330 asserts 107 107 107 0 n/a 00:05:44.330 00:05:44.330 Elapsed time = 0.000 seconds 00:05:44.330 [2024-05-15 04:42:58.310177] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:44.330 [2024-05-15 04:42:58.310445] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:44.330 [2024-05-15 04:42:58.310562] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:05:44.330 [2024-05-15 04:42:58.310600] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:44.330 [2024-05-15 04:42:58.310642] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:44.330 [2024-05-15 04:42:58.310684] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
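The four "Failed to populate iobuf ... buffer cache" records above are likewise deliberate: iobuf_cache configures per-channel caches larger than the shared pools (small_pool_count and large_pool_count of 4) to hit the failure path. Assuming the constraint being exercised is simply that the shared pool must cover every channel's cache -- a simplification, with hypothetical names, not the SPDK implementation -- the arithmetic looks like:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sizing check: each channel pre-fills its cache from a
 * shared pool, so the pool must cover every channel's cache. */
static bool iobuf_pool_is_big_enough(unsigned pool_count,
                                     unsigned cache_size,
                                     unsigned nr_channels)
{
    return (unsigned long long)cache_size * nr_channels <= pool_count;
}

int main(void)
{
    /* Mirrors the logged scenario: a pool of 4 small buffers cannot
     * populate two channels each asking for 4 cached buffers. */
    printf("fits: %d\n", iobuf_pool_is_big_enough(4, 4, 2)); /* 0 */
    printf("fits: %d\n", iobuf_pool_is_big_enough(8, 4, 2)); /* 1 */
    return 0;
}

Hence the hint in the log to raise spdk_iobuf_opts.small_pool_count (or shrink the per-channel cache) when this fires outside of a test.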
00:05:44.330 ************************************ 00:05:44.330 END TEST unittest_iobuf 00:05:44.330 ************************************ 00:05:44.330 00:05:44.330 real 0m0.040s 00:05:44.330 user 0m0.019s 00:05:44.330 sys 0m0.021s 00:05:44.330 04:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.330 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.330 04:42:58 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:05:44.330 04:42:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.330 04:42:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.331 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.331 ************************************ 00:05:44.331 START TEST unittest_util 00:05:44.331 ************************************ 00:05:44.331 04:42:58 -- common/autotest_common.sh@1104 -- # unittest_util 00:05:44.331 04:42:58 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: base64 00:05:44.331 Test: test_base64_get_encoded_strlen ...passed 00:05:44.331 Test: test_base64_get_decoded_len ...passed 00:05:44.331 Test: test_base64_encode ...passed 00:05:44.331 Test: test_base64_decode ...passed 00:05:44.331 Test: test_base64_urlsafe_encode ...passed 00:05:44.331 Test: test_base64_urlsafe_decode ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 6 6 6 0 0 00:05:44.331 asserts 112 112 112 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.000 seconds 00:05:44.331 04:42:58 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: bit_array 00:05:44.331 Test: test_1bit ...passed 00:05:44.331 Test: test_64bit ...passed 00:05:44.331 Test: test_find ...passed 00:05:44.331 Test: test_resize ...passed 00:05:44.331 Test: test_errors ...passed 00:05:44.331 Test: test_count ...passed 00:05:44.331 Test: test_mask_store_load ...passed 00:05:44.331 Test: test_mask_clear ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 8 8 8 0 0 00:05:44.331 asserts 5075 5075 5075 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.010 seconds 00:05:44.331 04:42:58 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: cpuset 00:05:44.331 Test: test_cpuset ...passed 00:05:44.331 Test: test_cpuset_parse ...[2024-05-15 04:42:58.450395] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:05:44.331 [2024-05-15 04:42:58.450614] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:05:44.331 [2024-05-15 04:42:58.450685] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:05:44.331 [2024-05-15 04:42:58.450786] 
/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:05:44.331 [2024-05-15 04:42:58.450814] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:05:44.331 [2024-05-15 04:42:58.450844] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:05:44.331 passed 00:05:44.331 Test: test_cpuset_fmt ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 3 3 3 0 0 00:05:44.331 asserts 65 65 65 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.000 seconds 00:05:44.331 [2024-05-15 04:42:58.450870] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:05:44.331 [2024-05-15 04:42:58.450916] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:05:44.331 04:42:58 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: crc16 00:05:44.331 Test: test_crc16_t10dif ...passed 00:05:44.331 Test: test_crc16_t10dif_seed ...passed 00:05:44.331 Test: test_crc16_t10dif_copy ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 3 3 3 0 0 00:05:44.331 asserts 5 5 5 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.000 seconds 00:05:44.331 04:42:58 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: crc32_ieee 00:05:44.331 Test: test_crc32_ieee ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 1 1 1 0 0 00:05:44.331 asserts 1 1 1 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.000 seconds 00:05:44.331 04:42:58 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: crc32c 00:05:44.331 Test: test_crc32c ...passed 00:05:44.331 Test: test_crc32c_nvme ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 2 2 2 0 0 00:05:44.331 asserts 16 16 16 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.000 seconds 00:05:44.331 04:42:58 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:05:44.331 00:05:44.331 00:05:44.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.331 http://cunit.sourceforge.net/ 00:05:44.331 00:05:44.331 00:05:44.331 Suite: crc64 00:05:44.331 Test: test_crc64_nvme ...passed 00:05:44.331 00:05:44.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.331 suites 1 1 n/a 0 0 00:05:44.331 tests 1 1 1 0 0 00:05:44.331 asserts 4 4 4 0 n/a 00:05:44.331 00:05:44.331 Elapsed time = 0.000 seconds 
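Before the string and dif suites start below, note that crc16's test_crc16_t10dif covers the same T10-DIF CRC (polynomial 0x8BB7, init 0x0000, unreflected, no final XOR) that the dif suite's guard-tag comparisons rely on. A bit-at-a-time sketch of that polynomial, independent of SPDK's table-driven implementation:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bit-at-a-time CRC-16/T10-DIF. Table-driven versions are faster;
 * this form is for clarity. */
static uint16_t crc16_t10dif(uint16_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)p[i] << 8;
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    /* Standard catalogue check value: CRC-16/T10-DIF("123456789") == 0xD0DB. */
    printf("0x%04X\n", crc16_t10dif(0, "123456789", 9));
    return 0;
}

The "Failed to compare Guard" records in the dif output below are this checksum disagreeing with the guard field injected into the test buffers, which is exactly what those negative-path tests arrange.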
00:05:44.331 04:42:58 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:05:44.593 00:05:44.593 00:05:44.593 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.593 http://cunit.sourceforge.net/ 00:05:44.593 00:05:44.593 00:05:44.593 Suite: string 00:05:44.593 Test: test_parse_ip_addr ...passed 00:05:44.593 Test: test_str_chomp ...passed 00:05:44.593 Test: test_parse_capacity ...passed 00:05:44.593 Test: test_sprintf_append_realloc ...passed 00:05:44.593 Test: test_strtol ...passed 00:05:44.593 Test: test_strtoll ...passed 00:05:44.593 Test: test_strarray ...passed 00:05:44.593 Test: test_strcpy_replace ...passed 00:05:44.593 00:05:44.593 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.593 suites 1 1 n/a 0 0 00:05:44.593 tests 8 8 8 0 0 00:05:44.593 asserts 161 161 161 0 n/a 00:05:44.593 00:05:44.593 Elapsed time = 0.000 seconds 00:05:44.593 04:42:58 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:05:44.593 00:05:44.593 00:05:44.593 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.593 http://cunit.sourceforge.net/ 00:05:44.593 00:05:44.593 00:05:44.593 Suite: dif 00:05:44.593 Test: dif_generate_and_verify_test ...[2024-05-15 04:42:58.598512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:44.593 [2024-05-15 04:42:58.598938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:44.593 [2024-05-15 04:42:58.599146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:44.593 passed 00:05:44.593 Test: dif_disable_check_test ...passed 00:05:44.593 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-05-15 04:42:58.599334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:44.593 [2024-05-15 04:42:58.599487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:44.593 [2024-05-15 04:42:58.599698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:44.593 [2024-05-15 04:42:58.600323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:44.593 [2024-05-15 04:42:58.600554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:44.593 [2024-05-15 04:42:58.600785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:44.593 [2024-05-15 04:42:58.601431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:05:44.593 [2024-05-15 04:42:58.601622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:05:44.593 [2024-05-15 04:42:58.601890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:05:44.593 [2024-05-15 04:42:58.602165] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:05:44.593 [2024-05-15 04:42:58.602348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:44.593 [2024-05-15 04:42:58.602509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:44.593 [2024-05-15 04:42:58.602678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:44.593 passed 00:05:44.593 Test: dif_apptag_mask_test ...[2024-05-15 04:42:58.603027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:44.593 [2024-05-15 04:42:58.603203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:44.593 [2024-05-15 04:42:58.603354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:44.593 [2024-05-15 04:42:58.603557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:44.593 [2024-05-15 04:42:58.603750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:44.593 passed 00:05:44.593 Test: dif_sec_512_md_0_error_test ...passed 00:05:44.593 Test: dif_sec_4096_md_0_error_test ...passed 00:05:44.593 Test: dif_sec_4100_md_128_error_test ...[2024-05-15 04:42:58.603920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:44.593 [2024-05-15 04:42:58.604025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:44.593 [2024-05-15 04:42:58.604083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:44.593 [2024-05-15 04:42:58.604130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:05:44.593 [2024-05-15 04:42:58.604183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:05:44.593 passed 00:05:44.593 Test: dif_guard_seed_test ...passed 00:05:44.593 Test: dif_guard_value_test ...[2024-05-15 04:42:58.604218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:05:44.593 passed 00:05:44.593 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:44.593 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:44.593 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 04:42:58.633769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:05:44.593 [2024-05-15 04:42:58.635304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fa21, Actual=fe21 00:05:44.593 [2024-05-15 04:42:58.636854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.593 [2024-05-15 04:42:58.638371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.593 [2024-05-15 04:42:58.640060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.593 [2024-05-15 04:42:58.641595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.593 [2024-05-15 04:42:58.643120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=cb3f 00:05:44.593 [2024-05-15 04:42:58.644200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=2601 00:05:44.593 [2024-05-15 04:42:58.645063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab757ed, 
Actual=1ab753ed 00:05:44.593 [2024-05-15 04:42:58.646351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574260, Actual=38574660 00:05:44.593 [2024-05-15 04:42:58.647660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.593 [2024-05-15 04:42:58.648958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.593 [2024-05-15 04:42:58.650255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.593 [2024-05-15 04:42:58.651547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.593 [2024-05-15 04:42:58.652984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=963d5c16 00:05:44.593 [2024-05-15 04:42:58.653854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=74621f4b 00:05:44.593 [2024-05-15 04:42:58.655390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.593 [2024-05-15 04:42:58.657383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:05:44.593 [2024-05-15 04:42:58.659331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.593 [2024-05-15 04:42:58.661323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.593 [2024-05-15 04:42:58.663397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:05:44.593 [2024-05-15 04:42:58.665405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:05:44.593 passed 00:05:44.593 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-15 04:42:58.667391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.593 [2024-05-15 04:42:58.668950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=4a63247376d7f83a 00:05:44.593 [2024-05-15 04:42:58.669272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.593 [2024-05-15 04:42:58.669489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:05:44.593 [2024-05-15 04:42:58.669683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.669899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 
04:42:58.670125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.670321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.670523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.594 [2024-05-15 04:42:58.670686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2601 00:05:44.594 [2024-05-15 04:42:58.670837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.594 [2024-05-15 04:42:58.671001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:05:44.594 [2024-05-15 04:42:58.671190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.671351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.671519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.671679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.671862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.594 [2024-05-15 04:42:58.671989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=74621f4b 00:05:44.594 [2024-05-15 04:42:58.672238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.594 [2024-05-15 04:42:58.672503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:05:44.594 [2024-05-15 04:42:58.672794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.673060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.673351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.594 [2024-05-15 04:42:58.673616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.594 passed 00:05:44.594 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-15 04:42:58.674068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.594 [2024-05-15 04:42:58.674327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4a63247376d7f83a 00:05:44.594 [2024-05-15 04:42:58.674526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.594 [2024-05-15 04:42:58.674770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:05:44.594 [2024-05-15 04:42:58.674966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.675169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.675385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.675592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.675809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.594 [2024-05-15 04:42:58.675979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2601 00:05:44.594 [2024-05-15 04:42:58.676108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.594 [2024-05-15 04:42:58.676275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:05:44.594 [2024-05-15 04:42:58.676438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.676602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 passed 00:05:44.594 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-15 04:42:58.676804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.676973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.677136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.594 [2024-05-15 04:42:58.677276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=74621f4b 00:05:44.594 [2024-05-15 04:42:58.677533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.594 [2024-05-15 04:42:58.677812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:05:44.594 [2024-05-15 04:42:58.678082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=488 00:05:44.594 [2024-05-15 04:42:58.678346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.678618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.594 [2024-05-15 04:42:58.678893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.594 [2024-05-15 04:42:58.679181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.594 [2024-05-15 04:42:58.679413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4a63247376d7f83a 00:05:44.594 [2024-05-15 04:42:58.679607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.594 [2024-05-15 04:42:58.679832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:05:44.594 [2024-05-15 04:42:58.680035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.680230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.680455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.680662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.680881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.594 [2024-05-15 04:42:58.681044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2601 00:05:44.594 [2024-05-15 04:42:58.681180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.594 passed 00:05:44.594 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-15 04:42:58.681344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:05:44.594 [2024-05-15 04:42:58.681530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.681705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.681878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.682046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.682217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.594 [2024-05-15 04:42:58.682351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=74621f4b 00:05:44.594 [2024-05-15 04:42:58.682593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.594 [2024-05-15 04:42:58.682876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:05:44.594 [2024-05-15 04:42:58.683140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.683412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.683685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.594 [2024-05-15 04:42:58.683974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.594 [2024-05-15 04:42:58.684259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.594 [2024-05-15 04:42:58.684500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4a63247376d7f83a 00:05:44.594 [2024-05-15 04:42:58.684694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.594 [2024-05-15 04:42:58.685072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:05:44.594 [2024-05-15 04:42:58.685284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.685480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.594 [2024-05-15 04:42:58.685700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.685917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.594 [2024-05-15 04:42:58.686130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.594 passed 00:05:44.595 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...passed 00:05:44.595 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-15 04:42:58.686292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2601 00:05:44.595 [2024-05-15 04:42:58.686468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.595 [2024-05-15 04:42:58.686630] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:05:44.595 [2024-05-15 04:42:58.686825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.686988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.687165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.595 [2024-05-15 04:42:58.687327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.595 [2024-05-15 04:42:58.687498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.595 [2024-05-15 04:42:58.687626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=74621f4b 00:05:44.595 [2024-05-15 04:42:58.687922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.595 [2024-05-15 04:42:58.688203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:05:44.595 [2024-05-15 04:42:58.688468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.688760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.689025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.595 [2024-05-15 04:42:58.689298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.595 [2024-05-15 04:42:58.689579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.595 [2024-05-15 04:42:58.689829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4a63247376d7f83a 00:05:44.595 [2024-05-15 04:42:58.690021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.595 [2024-05-15 04:42:58.690227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:05:44.595 [2024-05-15 04:42:58.690422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.690625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.690872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.595 
[2024-05-15 04:42:58.691069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.595 passed 00:05:44.595 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...passed 00:05:44.595 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-05-15 04:42:58.691272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.595 [2024-05-15 04:42:58.691435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2601 00:05:44.595 [2024-05-15 04:42:58.691587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.595 [2024-05-15 04:42:58.691763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:05:44.595 [2024-05-15 04:42:58.691951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.692114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.692284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.595 [2024-05-15 04:42:58.692455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.595 [2024-05-15 04:42:58.692654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.595 [2024-05-15 04:42:58.692792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=74621f4b 00:05:44.595 [2024-05-15 04:42:58.693073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.595 [2024-05-15 04:42:58.693341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:05:44.595 [2024-05-15 04:42:58.693617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.693895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.694174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.595 [2024-05-15 04:42:58.694439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.595 [2024-05-15 04:42:58.694735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.595 [2024-05-15 04:42:58.694976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4a63247376d7f83a 00:05:44.595 passed 00:05:44.595 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:44.595 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:44.595 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 04:42:58.726543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:05:44.595 [2024-05-15 04:42:58.728652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=391e, Actual=3d1e 00:05:44.595 [2024-05-15 04:42:58.730678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.732735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.734744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.595 [2024-05-15 04:42:58.736773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.595 [2024-05-15 04:42:58.737724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=cb3f 00:05:44.595 [2024-05-15 04:42:58.738663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=12d 00:05:44.595 [2024-05-15 04:42:58.739548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab757ed, Actual=1ab753ed 00:05:44.595 [2024-05-15 04:42:58.740293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=506a111b, Actual=506a151b 00:05:44.595 [2024-05-15 04:42:58.741039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.741800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.742508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.595 [2024-05-15 04:42:58.743249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.595 [2024-05-15 04:42:58.743966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=963d5c16 00:05:44.595 [2024-05-15 04:42:58.744693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=44a53805 
00:05:44.595 [2024-05-15 04:42:58.746086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.595 [2024-05-15 04:42:58.747512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e53fbbeb99f97bb, Actual=e53fbbeb99f93bb 00:05:44.595 [2024-05-15 04:42:58.748908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.750317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.751701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:05:44.595 [2024-05-15 04:42:58.753127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:05:44.595 [2024-05-15 04:42:58.754505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.595 passed 00:05:44.595 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-15 04:42:58.756084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=1a9990a557066164 00:05:44.595 [2024-05-15 04:42:58.756409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.595 [2024-05-15 04:42:58.756652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:05:44.595 [2024-05-15 04:42:58.756898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.757133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.595 [2024-05-15 04:42:58.757388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.757623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.757862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.596 [2024-05-15 04:42:58.758095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=f536 00:05:44.596 [2024-05-15 04:42:58.758271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.596 [2024-05-15 04:42:58.758449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:05:44.596 [2024-05-15 04:42:58.758634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.758825] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.758995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.759182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.759353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.596 [2024-05-15 04:42:58.759531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a6931987 00:05:44.596 [2024-05-15 04:42:58.759886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.596 [2024-05-15 04:42:58.760215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eece6f5e86a39c21, Actual=eece6f5e86a39821 00:05:44.596 [2024-05-15 04:42:58.760546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.760899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.761240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.596 [2024-05-15 04:42:58.761576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.596 [2024-05-15 04:42:58.761935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.596 [2024-05-15 04:42:58.762283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=fa040445683a6afe 00:05:44.596 passed 00:05:44.596 Test: dix_sec_512_md_0_error ...passed 00:05:44.596 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-05-15 04:42:58.762340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:05:44.596 passed 00:05:44.596 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:44.596 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:44.596 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:44.596 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:44.596 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:44.596 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:44.596 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:44.596 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:44.596 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-15 04:42:58.789926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:05:44.596 [2024-05-15 04:42:58.790907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=391e, Actual=3d1e 00:05:44.596 [2024-05-15 04:42:58.791870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.792849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.793829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.596 [2024-05-15 04:42:58.794787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.596 [2024-05-15 04:42:58.795727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=cb3f 00:05:44.596 [2024-05-15 04:42:58.796689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=12d 00:05:44.596 [2024-05-15 04:42:58.797405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab757ed, Actual=1ab753ed 00:05:44.596 [2024-05-15 04:42:58.798133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=506a111b, Actual=506a151b 00:05:44.596 [2024-05-15 04:42:58.798875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.799597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.800315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.596 [2024-05-15 04:42:58.801049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=45a 00:05:44.596 [2024-05-15 04:42:58.801767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=963d5c16 00:05:44.596 [2024-05-15 04:42:58.802492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=44a53805 00:05:44.596 [2024-05-15 04:42:58.803904] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.596 [2024-05-15 04:42:58.805307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e53fbbeb99f97bb, Actual=e53fbbeb99f93bb 00:05:44.596 [2024-05-15 04:42:58.806692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.808094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.809496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:05:44.596 [2024-05-15 04:42:58.810900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:05:44.596 [2024-05-15 04:42:58.812296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.596 [2024-05-15 04:42:58.813697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=1a9990a557066164 00:05:44.596 passed 00:05:44.596 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-15 04:42:58.813998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:05:44.596 [2024-05-15 04:42:58.814264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:05:44.596 [2024-05-15 04:42:58.814498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.814738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.814990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.815218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.815455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cb3f 00:05:44.596 [2024-05-15 04:42:58.815681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=f536 00:05:44.596 [2024-05-15 04:42:58.815872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:05:44.596 [2024-05-15 04:42:58.816046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:05:44.596 [2024-05-15 04:42:58.816240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.816426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.816598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.816799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:05:44.596 [2024-05-15 04:42:58.816972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=963d5c16 00:05:44.596 [2024-05-15 04:42:58.817157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a6931987 00:05:44.596 [2024-05-15 04:42:58.817492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:05:44.596 [2024-05-15 04:42:58.817848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eece6f5e86a39c21, Actual=eece6f5e86a39821 00:05:44.596 [2024-05-15 04:42:58.818179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 [2024-05-15 04:42:58.818516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:05:44.596 passed 00:05:44.596 Test: set_md_interleave_iovs_test ...[2024-05-15 04:42:58.818853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.596 [2024-05-15 04:42:58.819188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:05:44.596 [2024-05-15 04:42:58.819514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=820130b7c2f93f68 00:05:44.596 [2024-05-15 04:42:58.819862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=fa040445683a6afe 00:05:44.856 passed 00:05:44.856 Test: set_md_interleave_iovs_split_test ...passed 00:05:44.856 Test: dif_generate_stream_pi_16_test ...passed 00:05:44.856 Test: dif_generate_stream_test ...passed 00:05:44.856 Test: set_md_interleave_iovs_alignment_test ...passed 00:05:44.856 Test: dif_generate_split_test ...[2024-05-15 04:42:58.825693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
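The *ERROR* lines above are expected output, not failures: the dif inject tests deliberately corrupt the T10 protection information appended to each block and then check that _dif_verify reports the mismatch. Each protection-information field set carries a Guard (a CRC computed over the block data; the suite exercises several guard widths, which is why the Expected/Actual values above range from 16 to 64 bits), an Application Tag, and a Reference Tag derived from the LBA (note Expected=58 hex is 88 decimal at LBA=88), and the dix_sec_512_md_0_error case additionally checks that spdk_dif_ctx_init rejects a metadata area smaller than the DIF itself. As a minimal sketch of the comparison being exercised (simplified, assumed field names and a 16-bit guard only; which value lands in Expected versus Actual is SPDK's choice, not something this sketch reproduces):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified protection-information fields; the real dif.c also supports
     * 32- and 64-bit guards, as the mixed widths in the log show. */
    struct pi_fields {
        uint16_t guard;    /* CRC over the data block */
        uint16_t app_tag;  /* application-defined tag */
        uint32_t ref_tag;  /* typically derived from the LBA */
    };

    /* Bitwise CRC-16 with the T10-DIF polynomial 0x8BB7 (initial value 0). */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Recompute the guard and compare all three fields, reporting the first
     * mismatch in the style seen in the log above. */
    static bool pi_verify(const uint8_t *data, size_t len,
                          const struct pi_fields *pi, uint64_t lba,
                          uint16_t exp_app, uint32_t exp_ref)
    {
        uint16_t guard = crc16_t10dif(data, len);
        if (pi->guard != guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%llx, "
                    "Expected=%x, Actual=%x\n",
                    (unsigned long long)lba, guard, pi->guard);
            return false;
        }
        if (pi->app_tag != exp_app) {
            fprintf(stderr, "Failed to compare App Tag: LBA=%llx, "
                    "Expected=%x, Actual=%x\n",
                    (unsigned long long)lba, exp_app, pi->app_tag);
            return false;
        }
        if (pi->ref_tag != exp_ref) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%llx, "
                    "Expected=%x, Actual=%x\n",
                    (unsigned long long)lba, exp_ref, pi->ref_tag);
            return false;
        }
        return true;
    }

A corrupted data byte surfaces as a guard mismatch, while a corrupted tag surfaces directly in the App Tag or Ref Tag comparison, matching the three error shapes in the log.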
00:05:44.856 passed 00:05:44.856 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:05:44.856 Test: dif_verify_split_test ...passed 00:05:44.856 Test: dif_verify_stream_multi_segments_test ...passed 00:05:44.856 Test: update_crc32c_pi_16_test ...passed 00:05:44.856 Test: update_crc32c_test ...passed 00:05:44.856 Test: dif_update_crc32c_split_test ...passed 00:05:44.856 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:05:44.856 Test: get_range_with_md_test ...passed 00:05:44.856 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:05:44.856 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:05:44.856 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:44.856 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:05:44.856 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:05:44.856 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:44.856 Test: dif_generate_and_verify_unmap_test ...passed 00:05:44.856 00:05:44.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.856 suites 1 1 n/a 0 0 00:05:44.856 tests 79 79 79 0 0 00:05:44.856 asserts 3584 3584 3584 0 n/a 00:05:44.856 00:05:44.856 Elapsed time = 0.260 seconds 00:05:44.856 04:42:58 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:05:44.856 00:05:44.856 00:05:44.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.856 http://cunit.sourceforge.net/ 00:05:44.856 00:05:44.856 00:05:44.856 Suite: iov 00:05:44.856 Test: test_single_iov ...passed 00:05:44.856 Test: test_simple_iov ...passed 00:05:44.856 Test: test_complex_iov ...passed 00:05:44.856 Test: test_iovs_to_buf ...passed 00:05:44.856 Test: test_buf_to_iovs ...passed 00:05:44.856 Test: test_memset ...passed 00:05:44.856 Test: test_iov_one ...passed 00:05:44.856 Test: test_iov_xfer ...passed 00:05:44.856 00:05:44.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.856 suites 1 1 n/a 0 0 00:05:44.856 tests 8 8 8 0 0 00:05:44.856 asserts 156 156 156 0 n/a 00:05:44.856 00:05:44.856 Elapsed time = 0.000 seconds 00:05:44.856 04:42:58 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:05:44.856 00:05:44.856 00:05:44.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.856 http://cunit.sourceforge.net/ 00:05:44.856 00:05:44.856 00:05:44.856 Suite: math 00:05:44.856 Test: test_serial_number_arithmetic ...passed 00:05:44.856 Suite: erase 00:05:44.856 Test: test_memset_s ...passed 00:05:44.856 00:05:44.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.856 suites 2 2 n/a 0 0 00:05:44.856 tests 2 2 2 0 0 00:05:44.856 asserts 18 18 18 0 n/a 00:05:44.856 00:05:44.856 Elapsed time = 0.000 seconds 00:05:44.856 04:42:58 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:05:44.856 00:05:44.856 00:05:44.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.856 http://cunit.sourceforge.net/ 00:05:44.856 00:05:44.856 00:05:44.856 Suite: pipe 00:05:44.856 Test: test_create_destroy ...passed 00:05:44.856 Test: test_write_get_buffer ...passed 00:05:44.856 Test: test_write_advance ...passed 00:05:44.856 Test: test_read_get_buffer ...passed 00:05:44.856 Test: test_read_advance ...passed 00:05:44.856 Test: test_data ...passed 00:05:44.856 00:05:44.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.856 suites 1 1 n/a 0 
0 00:05:44.856 tests 6 6 6 0 0 00:05:44.856 asserts 250 250 250 0 n/a 00:05:44.856 00:05:44.856 Elapsed time = 0.000 seconds 00:05:44.856 04:42:58 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:05:44.856 00:05:44.856 00:05:44.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.856 http://cunit.sourceforge.net/ 00:05:44.856 00:05:44.856 00:05:44.856 Suite: xor 00:05:44.856 Test: test_xor_gen ...passed 00:05:44.856 00:05:44.856 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.856 suites 1 1 n/a 0 0 00:05:44.856 tests 1 1 1 0 0 00:05:44.856 asserts 17 17 17 0 n/a 00:05:44.856 00:05:44.856 Elapsed time = 0.000 seconds 00:05:44.856 00:05:44.856 real 0m0.592s 00:05:44.856 user 0m0.402s 00:05:44.856 sys 0m0.194s 00:05:44.856 ************************************ 00:05:44.856 END TEST unittest_util 00:05:44.856 ************************************ 00:05:44.856 04:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.856 04:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.856 04:42:59 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:44.856 04:42:59 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:05:44.856 04:42:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.856 04:42:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.856 04:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.856 ************************************ 00:05:44.856 START TEST unittest_vhost 00:05:44.856 ************************************ 00:05:44.856 04:42:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:05:44.856 00:05:44.856 00:05:44.856 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.856 http://cunit.sourceforge.net/ 00:05:44.856 00:05:44.856 00:05:44.856 Suite: vhost_suite 00:05:44.857 Test: desc_to_iov_test ...[2024-05-15 04:42:59.052232] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:05:44.857 passed 00:05:44.857 Test: create_controller_test ...[2024-05-15 04:42:59.055199] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:05:44.857 [2024-05-15 04:42:59.055287] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:05:44.857 [2024-05-15 04:42:59.055379] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:05:44.857 [2024-05-15 04:42:59.055444] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:05:44.857 [2024-05-15 04:42:59.055479] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:05:44.857 [2024-05-15 04:42:59.055672] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-05-15 04:42:59.056444] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:05:44.857 passed 00:05:44.857 Test: session_find_by_vid_test ...passed 00:05:44.857 Test: remove_controller_test ...[2024-05-15 04:42:59.057882] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:05:44.857 passed 00:05:44.857 Test: vq_avail_ring_get_test ...passed 00:05:44.857 Test: vq_packed_ring_test ...passed 00:05:44.857 Test: vhost_blk_construct_test ...passed 00:05:44.857 00:05:44.857 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.857 suites 1 1 n/a 0 0 00:05:44.857 tests 7 7 7 0 0 00:05:44.857 asserts 145 145 145 0 n/a 00:05:44.857 00:05:44.857 Elapsed time = 0.010 seconds 00:05:44.857 00:05:44.857 real 0m0.039s 00:05:44.857 user 0m0.020s 00:05:44.857 sys 0m0.018s 00:05:44.857 04:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.857 ************************************ 00:05:44.857 END TEST unittest_vhost 00:05:44.857 ************************************ 00:05:44.857 04:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.116 04:42:59 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:45.116 04:42:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.116 04:42:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.116 04:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.116 ************************************ 00:05:45.116 START TEST unittest_dma 00:05:45.116 ************************************ 00:05:45.116 04:42:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:45.116 00:05:45.116 00:05:45.116 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.116 http://cunit.sourceforge.net/ 00:05:45.116 00:05:45.116 00:05:45.116 Suite: dma_suite 00:05:45.116 Test: test_dma ...passed 00:05:45.116 00:05:45.116 [2024-05-15 04:42:59.141690] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:05:45.116 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.116 suites 1 1 n/a 0 0 00:05:45.116 tests 1 1 1 0 0 00:05:45.116 asserts 50 50 50 0 n/a 00:05:45.116 00:05:45.116 Elapsed time = 0.000 seconds 00:05:45.116 00:05:45.116 real 0m0.034s 00:05:45.116 user 0m0.019s 00:05:45.116 sys 0m0.016s 00:05:45.116 
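The vhost_suite and dma_suite errors just above are negative-path checks in the same spirit: creating a controller must fail when no name is given, when the requested cpumask selects cpus outside the application's core mask (0xf0 and 0xff against 0xf above), when the resulting socket path would overflow (the long run of x's), or when the name is already registered, and spdk_memory_domain_create must reject a zero context size. The core-mask rule reduces to a two-condition check; a minimal sketch with assumed names, not the actual vhost.c code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: treat a requested cpumask as valid only if it is non-empty and
     * selects no cpu outside the core mask the app was started with. */
    static bool cpumask_is_valid(uint64_t requested, uint64_t core_mask)
    {
        return requested != 0 && (requested & ~core_mask) == 0;
    }

    /* cpumask_is_valid(0xf0, 0xf) == false, matching
     * "cpumask 0xf0 is invalid (core mask is 0xf)" above. */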
************************************ 00:05:45.116 END TEST unittest_dma 00:05:45.116 ************************************ 00:05:45.116 04:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.116 04:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.116 04:42:59 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:05:45.116 04:42:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.116 04:42:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.116 04:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.116 ************************************ 00:05:45.116 START TEST unittest_init 00:05:45.116 ************************************ 00:05:45.116 04:42:59 -- common/autotest_common.sh@1104 -- # unittest_init 00:05:45.116 04:42:59 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:05:45.116 00:05:45.116 00:05:45.116 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.116 http://cunit.sourceforge.net/ 00:05:45.116 00:05:45.116 00:05:45.116 Suite: subsystem_suite 00:05:45.116 Test: subsystem_sort_test_depends_on_single ...passed 00:05:45.116 Test: subsystem_sort_test_depends_on_multiple ...passed 00:05:45.116 Test: subsystem_sort_test_missing_dependency ...passed 00:05:45.116 00:05:45.116 [2024-05-15 04:42:59.230393] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:05:45.116 [2024-05-15 04:42:59.230646] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:05:45.116 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.116 suites 1 1 n/a 0 0 00:05:45.116 tests 3 3 3 0 0 00:05:45.116 asserts 20 20 20 0 n/a 00:05:45.116 00:05:45.116 Elapsed time = 0.000 seconds 00:05:45.116 00:05:45.116 real 0m0.037s 00:05:45.116 user 0m0.018s 00:05:45.116 sys 0m0.020s 00:05:45.116 ************************************ 00:05:45.116 END TEST unittest_init 00:05:45.116 ************************************ 00:05:45.116 04:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.116 04:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.116 04:42:59 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:05:45.116 04:42:59 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:45.116 04:42:59 -- unit/unittest.sh@290 -- # hostname 00:05:45.116 04:42:59 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:05:45.375 geninfo: WARNING: invalid characters removed from testname! 
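Likewise for the subsystem_suite a few entries above: spdk_subsystem_init must refuse to proceed when a registered subsystem names a dependency that was never registered ("subsystem A dependency B is missing") or reports a required subsystem as missing outright ("subsystem C is missing"); the sort tests then confirm that valid dependency graphs initialize in dependency order. The missing-dependency check is essentially a lookup over the declared edges before that sort; a minimal sketch under assumed names:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct subsystem {
        const char *name;
        const char **depends_on;   /* NULL-terminated list of names */
    };

    /* Returns NULL if every declared dependency is registered, otherwise the
     * first missing name (the caller would log "... dependency X is missing"). */
    static const char *find_missing_dep(const struct subsystem *all, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            for (const char **d = all[i].depends_on; d != NULL && *d != NULL; d++) {
                bool found = false;
                for (size_t j = 0; j < n; j++) {
                    if (strcmp(all[j].name, *d) == 0) {
                        found = true;
                        break;
                    }
                }
                if (!found)
                    return *d;
            }
        }
        return NULL;
    }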
00:06:11.922 04:43:24 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:14.463 04:43:28 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:16.998 04:43:30 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:18.941 04:43:32 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:21.471 04:43:35 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:23.374 04:43:37 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:25.278 04:43:39 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:27.182 04:43:41 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:27.182 04:43:41 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:27.749 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:27.749 Found 308 entries. 
00:06:27.749 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:06:27.749 Writing .css and .png files. 00:06:27.749 Generating output. 00:06:27.749 Processing file include/linux/virtio_ring.h 00:06:28.008 Processing file include/spdk/util.h 00:06:28.008 Processing file include/spdk/endian.h 00:06:28.008 Processing file include/spdk/thread.h 00:06:28.008 Processing file include/spdk/nvme.h 00:06:28.008 Processing file include/spdk/histogram_data.h 00:06:28.008 Processing file include/spdk/nvme_spec.h 00:06:28.008 Processing file include/spdk/bdev_module.h 00:06:28.008 Processing file include/spdk/trace.h 00:06:28.008 Processing file include/spdk/mmio.h 00:06:28.008 Processing file include/spdk/nvmf_transport.h 00:06:28.008 Processing file include/spdk/base64.h 00:06:28.267 Processing file include/spdk_internal/rdma.h 00:06:28.267 Processing file include/spdk_internal/nvme_tcp.h 00:06:28.267 Processing file include/spdk_internal/sock.h 00:06:28.267 Processing file include/spdk_internal/utf.h 00:06:28.267 Processing file include/spdk_internal/sgl.h 00:06:28.267 Processing file include/spdk_internal/virtio.h 00:06:28.267 Processing file lib/accel/accel_sw.c 00:06:28.267 Processing file lib/accel/accel.c 00:06:28.267 Processing file lib/accel/accel_rpc.c 00:06:28.526 Processing file lib/bdev/bdev.c 00:06:28.526 Processing file lib/bdev/bdev_zone.c 00:06:28.526 Processing file lib/bdev/part.c 00:06:28.526 Processing file lib/bdev/bdev_rpc.c 00:06:28.526 Processing file lib/bdev/scsi_nvme.c 00:06:28.818 Processing file lib/blob/blob_bs_dev.c 00:06:28.818 Processing file lib/blob/blobstore.h 00:06:28.818 Processing file lib/blob/request.c 00:06:28.818 Processing file lib/blob/blobstore.c 00:06:28.818 Processing file lib/blob/zeroes.c 00:06:28.818 Processing file lib/blobfs/blobfs.c 00:06:28.818 Processing file lib/blobfs/tree.c 00:06:28.818 Processing file lib/conf/conf.c 00:06:28.818 Processing file lib/dma/dma.c 00:06:29.077 Processing file lib/env_dpdk/pci_virtio.c 00:06:29.078 Processing file lib/env_dpdk/pci_event.c 00:06:29.078 Processing file lib/env_dpdk/pci_vmd.c 00:06:29.078 Processing file lib/env_dpdk/pci_dpdk.c 00:06:29.078 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:06:29.078 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:06:29.078 Processing file lib/env_dpdk/pci_ioat.c 00:06:29.078 Processing file lib/env_dpdk/sigbus_handler.c 00:06:29.078 Processing file lib/env_dpdk/threads.c 00:06:29.078 Processing file lib/env_dpdk/pci_idxd.c 00:06:29.078 Processing file lib/env_dpdk/memory.c 00:06:29.078 Processing file lib/env_dpdk/pci.c 00:06:29.078 Processing file lib/env_dpdk/init.c 00:06:29.078 Processing file lib/env_dpdk/env.c 00:06:29.078 Processing file lib/event/app_rpc.c 00:06:29.078 Processing file lib/event/reactor.c 00:06:29.078 Processing file lib/event/app.c 00:06:29.078 Processing file lib/event/scheduler_static.c 00:06:29.078 Processing file lib/event/log_rpc.c 00:06:29.646 Processing file lib/ftl/ftl_debug.h 00:06:29.646 Processing file lib/ftl/ftl_debug.c 00:06:29.646 Processing file lib/ftl/ftl_core.c 00:06:29.646 Processing file lib/ftl/ftl_io.c 00:06:29.646 Processing file lib/ftl/ftl_core.h 00:06:29.646 Processing file lib/ftl/ftl_io.h 00:06:29.646 Processing file lib/ftl/ftl_band.h 00:06:29.646 Processing file lib/ftl/ftl_writer.c 00:06:29.646 Processing file lib/ftl/ftl_band.c 00:06:29.646 Processing file lib/ftl/ftl_trace.c 00:06:29.646 Processing file lib/ftl/ftl_writer.h 00:06:29.646 Processing file lib/ftl/ftl_sb.c 00:06:29.646 Processing file 
lib/ftl/ftl_p2l.c 00:06:29.646 Processing file lib/ftl/ftl_rq.c 00:06:29.646 Processing file lib/ftl/ftl_band_ops.c 00:06:29.646 Processing file lib/ftl/ftl_init.c 00:06:29.646 Processing file lib/ftl/ftl_nv_cache_io.h 00:06:29.646 Processing file lib/ftl/ftl_nv_cache.c 00:06:29.646 Processing file lib/ftl/ftl_nv_cache.h 00:06:29.646 Processing file lib/ftl/ftl_l2p_flat.c 00:06:29.646 Processing file lib/ftl/ftl_l2p.c 00:06:29.646 Processing file lib/ftl/ftl_reloc.c 00:06:29.646 Processing file lib/ftl/ftl_l2p_cache.c 00:06:29.646 Processing file lib/ftl/ftl_layout.c 00:06:29.646 Processing file lib/ftl/base/ftl_base_bdev.c 00:06:29.646 Processing file lib/ftl/base/ftl_base_dev.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:06:29.646 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:06:29.646 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:06:29.646 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:06:29.905 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:06:29.905 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:06:29.905 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:06:29.905 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:06:29.905 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:06:29.905 Processing file lib/ftl/utils/ftl_property.h 00:06:29.905 Processing file lib/ftl/utils/ftl_bitmap.c 00:06:29.905 Processing file lib/ftl/utils/ftl_conf.c 00:06:29.905 Processing file lib/ftl/utils/ftl_df.h 00:06:29.905 Processing file lib/ftl/utils/ftl_md.c 00:06:29.905 Processing file lib/ftl/utils/ftl_addr_utils.h 00:06:29.905 Processing file lib/ftl/utils/ftl_mempool.c 00:06:29.905 Processing file lib/ftl/utils/ftl_property.c 00:06:29.905 Processing file lib/idxd/idxd.c 00:06:29.905 Processing file lib/idxd/idxd_user.c 00:06:29.905 Processing file lib/idxd/idxd_internal.h 00:06:30.164 Processing file lib/init/subsystem_rpc.c 00:06:30.164 Processing file lib/init/rpc.c 00:06:30.164 Processing file lib/init/json_config.c 00:06:30.164 Processing file lib/init/subsystem.c 00:06:30.164 Processing file lib/ioat/ioat_internal.h 00:06:30.164 Processing file lib/ioat/ioat.c 00:06:30.423 Processing file lib/iscsi/init_grp.c 00:06:30.423 Processing file lib/iscsi/task.h 00:06:30.423 Processing file lib/iscsi/iscsi_subsystem.c 00:06:30.423 Processing file lib/iscsi/conn.c 00:06:30.423 Processing file lib/iscsi/tgt_node.c 00:06:30.423 Processing file lib/iscsi/iscsi_rpc.c 00:06:30.423 Processing file lib/iscsi/portal_grp.c 00:06:30.423 Processing file lib/iscsi/iscsi.h 00:06:30.423 Processing file lib/iscsi/param.c 00:06:30.423 Processing file lib/iscsi/iscsi.c 00:06:30.423 Processing file lib/iscsi/md5.c 00:06:30.423 Processing file lib/iscsi/task.c 00:06:30.682 Processing file lib/json/json_parse.c 00:06:30.682 Processing file lib/json/json_util.c 00:06:30.682 Processing file lib/json/json_write.c 00:06:30.682 Processing file 
lib/jsonrpc/jsonrpc_server.c 00:06:30.682 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:06:30.682 Processing file lib/jsonrpc/jsonrpc_client.c 00:06:30.682 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:06:30.682 Processing file lib/log/log_flags.c 00:06:30.682 Processing file lib/log/log_deprecated.c 00:06:30.682 Processing file lib/log/log.c 00:06:30.682 Processing file lib/lvol/lvol.c 00:06:30.941 Processing file lib/nbd/nbd.c 00:06:30.941 Processing file lib/nbd/nbd_rpc.c 00:06:30.941 Processing file lib/notify/notify_rpc.c 00:06:30.941 Processing file lib/notify/notify.c 00:06:31.510 Processing file lib/nvme/nvme_cuse.c 00:06:31.510 Processing file lib/nvme/nvme_ctrlr.c 00:06:31.510 Processing file lib/nvme/nvme_poll_group.c 00:06:31.510 Processing file lib/nvme/nvme_ns_cmd.c 00:06:31.510 Processing file lib/nvme/nvme_tcp.c 00:06:31.510 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:06:31.510 Processing file lib/nvme/nvme_discovery.c 00:06:31.510 Processing file lib/nvme/nvme_vfio_user.c 00:06:31.510 Processing file lib/nvme/nvme_fabric.c 00:06:31.510 Processing file lib/nvme/nvme_opal.c 00:06:31.510 Processing file lib/nvme/nvme_transport.c 00:06:31.510 Processing file lib/nvme/nvme_ns.c 00:06:31.510 Processing file lib/nvme/nvme_pcie_common.c 00:06:31.510 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:06:31.510 Processing file lib/nvme/nvme_io_msg.c 00:06:31.510 Processing file lib/nvme/nvme_pcie_internal.h 00:06:31.510 Processing file lib/nvme/nvme.c 00:06:31.510 Processing file lib/nvme/nvme_pcie.c 00:06:31.510 Processing file lib/nvme/nvme_internal.h 00:06:31.510 Processing file lib/nvme/nvme_zns.c 00:06:31.510 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:06:31.510 Processing file lib/nvme/nvme_rdma.c 00:06:31.510 Processing file lib/nvme/nvme_qpair.c 00:06:31.510 Processing file lib/nvme/nvme_quirks.c 00:06:31.769 Processing file lib/nvmf/nvmf.c 00:06:31.769 Processing file lib/nvmf/nvmf_internal.h 00:06:31.769 Processing file lib/nvmf/nvmf_rpc.c 00:06:31.769 Processing file lib/nvmf/ctrlr.c 00:06:31.769 Processing file lib/nvmf/subsystem.c 00:06:31.769 Processing file lib/nvmf/tcp.c 00:06:31.769 Processing file lib/nvmf/transport.c 00:06:31.769 Processing file lib/nvmf/ctrlr_bdev.c 00:06:31.769 Processing file lib/nvmf/rdma.c 00:06:31.769 Processing file lib/nvmf/ctrlr_discovery.c 00:06:31.769 Processing file lib/rdma/common.c 00:06:31.769 Processing file lib/rdma/rdma_verbs.c 00:06:32.028 Processing file lib/rpc/rpc.c 00:06:32.028 Processing file lib/scsi/port.c 00:06:32.028 Processing file lib/scsi/scsi_bdev.c 00:06:32.028 Processing file lib/scsi/lun.c 00:06:32.028 Processing file lib/scsi/scsi_pr.c 00:06:32.028 Processing file lib/scsi/task.c 00:06:32.028 Processing file lib/scsi/dev.c 00:06:32.028 Processing file lib/scsi/scsi.c 00:06:32.028 Processing file lib/scsi/scsi_rpc.c 00:06:32.028 Processing file lib/sock/sock_rpc.c 00:06:32.028 Processing file lib/sock/sock.c 00:06:32.287 Processing file lib/thread/thread.c 00:06:32.287 Processing file lib/thread/iobuf.c 00:06:32.287 Processing file lib/trace/trace_rpc.c 00:06:32.287 Processing file lib/trace/trace_flags.c 00:06:32.287 Processing file lib/trace/trace.c 00:06:32.287 Processing file lib/trace_parser/trace.cpp 00:06:32.287 Processing file lib/ut/ut.c 00:06:32.546 Processing file lib/ut_mock/mock.c 00:06:32.805 Processing file lib/util/string.c 00:06:32.806 Processing file lib/util/strerror_tls.c 00:06:32.806 Processing file lib/util/hexlify.c 00:06:32.806 Processing file lib/util/uuid.c 00:06:32.806 
Processing file lib/util/fd_group.c 00:06:32.806 Processing file lib/util/crc16.c 00:06:32.806 Processing file lib/util/xor.c 00:06:32.806 Processing file lib/util/math.c 00:06:32.806 Processing file lib/util/dif.c 00:06:32.806 Processing file lib/util/bit_array.c 00:06:32.806 Processing file lib/util/fd.c 00:06:32.806 Processing file lib/util/iov.c 00:06:32.806 Processing file lib/util/crc64.c 00:06:32.806 Processing file lib/util/cpuset.c 00:06:32.806 Processing file lib/util/zipf.c 00:06:32.806 Processing file lib/util/crc32.c 00:06:32.806 Processing file lib/util/crc32c.c 00:06:32.806 Processing file lib/util/crc32_ieee.c 00:06:32.806 Processing file lib/util/file.c 00:06:32.806 Processing file lib/util/pipe.c 00:06:32.806 Processing file lib/util/base64.c 00:06:32.806 Processing file lib/vfio_user/host/vfio_user_pci.c 00:06:32.806 Processing file lib/vfio_user/host/vfio_user.c 00:06:33.065 Processing file lib/vhost/rte_vhost_user.c 00:06:33.065 Processing file lib/vhost/vhost_rpc.c 00:06:33.065 Processing file lib/vhost/vhost_blk.c 00:06:33.065 Processing file lib/vhost/vhost_scsi.c 00:06:33.065 Processing file lib/vhost/vhost.c 00:06:33.065 Processing file lib/vhost/vhost_internal.h 00:06:33.065 Processing file lib/virtio/virtio_vfio_user.c 00:06:33.065 Processing file lib/virtio/virtio.c 00:06:33.065 Processing file lib/virtio/virtio_pci.c 00:06:33.065 Processing file lib/virtio/virtio_vhost_user.c 00:06:33.325 Processing file lib/vmd/vmd.c 00:06:33.325 Processing file lib/vmd/led.c 00:06:33.325 Processing file module/accel/dsa/accel_dsa.c 00:06:33.325 Processing file module/accel/dsa/accel_dsa_rpc.c 00:06:33.325 Processing file module/accel/error/accel_error_rpc.c 00:06:33.325 Processing file module/accel/error/accel_error.c 00:06:33.325 Processing file module/accel/iaa/accel_iaa.c 00:06:33.325 Processing file module/accel/iaa/accel_iaa_rpc.c 00:06:33.325 Processing file module/accel/ioat/accel_ioat.c 00:06:33.325 Processing file module/accel/ioat/accel_ioat_rpc.c 00:06:33.583 Processing file module/bdev/aio/bdev_aio.c 00:06:33.583 Processing file module/bdev/aio/bdev_aio_rpc.c 00:06:33.583 Processing file module/bdev/daos/bdev_daos_rpc.c 00:06:33.583 Processing file module/bdev/daos/bdev_daos.c 00:06:33.583 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:06:33.583 Processing file module/bdev/delay/vbdev_delay.c 00:06:33.583 Processing file module/bdev/error/vbdev_error_rpc.c 00:06:33.583 Processing file module/bdev/error/vbdev_error.c 00:06:33.842 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:06:33.842 Processing file module/bdev/ftl/bdev_ftl.c 00:06:33.842 Processing file module/bdev/gpt/vbdev_gpt.c 00:06:33.842 Processing file module/bdev/gpt/gpt.c 00:06:33.842 Processing file module/bdev/gpt/gpt.h 00:06:33.842 Processing file module/bdev/lvol/vbdev_lvol.c 00:06:33.842 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:06:34.101 Processing file module/bdev/malloc/bdev_malloc.c 00:06:34.101 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:06:34.101 Processing file module/bdev/null/bdev_null_rpc.c 00:06:34.101 Processing file module/bdev/null/bdev_null.c 00:06:34.360 Processing file module/bdev/nvme/bdev_mdns_client.c 00:06:34.361 Processing file module/bdev/nvme/bdev_nvme.c 00:06:34.361 Processing file module/bdev/nvme/vbdev_opal.c 00:06:34.361 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:06:34.361 Processing file module/bdev/nvme/nvme_rpc.c 00:06:34.361 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:06:34.361 Processing file 
module/bdev/nvme/bdev_nvme_rpc.c 00:06:34.361 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:06:34.361 Processing file module/bdev/passthru/vbdev_passthru.c 00:06:34.620 Processing file module/bdev/raid/raid0.c 00:06:34.620 Processing file module/bdev/raid/bdev_raid_rpc.c 00:06:34.620 Processing file module/bdev/raid/bdev_raid.h 00:06:34.620 Processing file module/bdev/raid/concat.c 00:06:34.620 Processing file module/bdev/raid/raid1.c 00:06:34.620 Processing file module/bdev/raid/bdev_raid_sb.c 00:06:34.620 Processing file module/bdev/raid/bdev_raid.c 00:06:34.620 Processing file module/bdev/split/vbdev_split.c 00:06:34.620 Processing file module/bdev/split/vbdev_split_rpc.c 00:06:34.879 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:06:34.879 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:06:34.879 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:06:34.879 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:06:34.879 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:06:34.879 Processing file module/blob/bdev/blob_bdev.c 00:06:34.879 Processing file module/blobfs/bdev/blobfs_bdev.c 00:06:34.879 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:06:35.137 Processing file module/env_dpdk/env_dpdk_rpc.c 00:06:35.137 Processing file module/event/subsystems/accel/accel.c 00:06:35.137 Processing file module/event/subsystems/bdev/bdev.c 00:06:35.137 Processing file module/event/subsystems/iobuf/iobuf.c 00:06:35.137 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:06:35.137 Processing file module/event/subsystems/iscsi/iscsi.c 00:06:35.137 Processing file module/event/subsystems/nbd/nbd.c 00:06:35.396 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:06:35.396 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:06:35.396 Processing file module/event/subsystems/scheduler/scheduler.c 00:06:35.396 Processing file module/event/subsystems/scsi/scsi.c 00:06:35.396 Processing file module/event/subsystems/sock/sock.c 00:06:35.396 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:06:35.396 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:06:35.396 Processing file module/event/subsystems/vmd/vmd.c 00:06:35.396 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:06:35.655 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:06:35.655 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:06:35.655 Processing file module/scheduler/gscheduler/gscheduler.c 00:06:35.655 Processing file module/sock/sock_kernel.h 00:06:35.914 Processing file module/sock/posix/posix.c 00:06:35.914 Writing directory view page. 
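The long "Processing file" run above is genhtml rendering the per-file HTML coverage view from the lcov capture; the overall line and function rates follow below. A minimal sketch for regenerating such a report by hand, assuming a coverage-enabled build (./configure --enable-coverage) and the lcov/genhtml tools:

# Capture the .gcda counters left behind by the unit tests, then render HTML.
lcov --capture --directory . --output-file cov_test.info
genhtml cov_test.info --output-directory ut_coverage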
00:06:35.914 Overall coverage rate: 00:06:35.914 lines......: 38.7% (38505 of 99519 lines) 00:06:35.914 functions..: 42.4% (3526 of 8319 functions) 00:06:35.914 00:06:35.914 00:06:35.914 ===================== 00:06:35.914 All unit tests passed 00:06:35.914 ===================== 00:06:35.914 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:35.914 04:43:49 -- unit/unittest.sh@302 -- # set +x 00:06:35.914 00:06:35.914 00:06:35.914 ************************************ 00:06:35.914 END TEST unittest 00:06:35.914 ************************************ 00:06:35.914 00:06:35.914 real 2m9.399s 00:06:35.914 user 1m48.159s 00:06:35.914 sys 0m13.169s 00:06:35.914 04:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.914 04:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.914 04:43:49 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:06:35.914 04:43:49 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:06:35.914 04:43:49 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:06:35.914 04:43:49 -- spdk/autotest.sh@173 -- # timing_enter lib 00:06:35.914 04:43:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:35.914 04:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.914 04:43:49 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:35.914 04:43:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.914 04:43:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.914 04:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.914 ************************************ 00:06:35.914 START TEST env 00:06:35.914 ************************************ 00:06:35.914 04:43:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:35.914 * Looking for test storage... 
00:06:35.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:35.914 04:43:50 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:35.914 04:43:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.914 04:43:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.914 04:43:50 -- common/autotest_common.sh@10 -- # set +x 00:06:35.914 ************************************ 00:06:35.914 START TEST env_memory 00:06:35.914 ************************************ 00:06:35.914 04:43:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:35.914 00:06:35.914 00:06:35.914 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.914 http://cunit.sourceforge.net/ 00:06:35.914 00:06:35.914 00:06:35.914 Suite: memory 00:06:35.914 Test: alloc and free memory map ...[2024-05-15 04:43:50.129785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:36.173 passed 00:06:36.173 Test: mem map translation ...[2024-05-15 04:43:50.164329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:36.173 [2024-05-15 04:43:50.164475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:36.173 [2024-05-15 04:43:50.164560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:36.173 [2024-05-15 04:43:50.164671] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:36.173 passed 00:06:36.173 Test: mem map registration ...[2024-05-15 04:43:50.208649] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:36.173 [2024-05-15 04:43:50.208776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:36.173 passed 00:06:36.173 Test: mem map adjacent registrations ...passed 00:06:36.173 00:06:36.173 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.173 suites 1 1 n/a 0 0 00:06:36.173 tests 4 4 4 0 0 00:06:36.173 asserts 152 152 152 0 n/a 00:06:36.173 00:06:36.173 Elapsed time = 0.160 seconds 00:06:36.173 ************************************ 00:06:36.173 END TEST env_memory 00:06:36.173 ************************************ 00:06:36.173 00:06:36.173 real 0m0.201s 00:06:36.173 user 0m0.177s 00:06:36.173 sys 0m0.024s 00:06:36.173 04:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.173 04:43:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.173 04:43:50 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:36.173 04:43:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.173 04:43:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.173 04:43:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.173 ************************************ 00:06:36.174 START TEST env_vtophys 00:06:36.174 ************************************ 00:06:36.174 04:43:50 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:36.444 EAL: lib.eal log level changed from notice to debug 00:06:36.444 EAL: Detected lcore 0 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 1 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 2 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 3 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 4 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 5 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 6 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 7 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 8 as core 0 on socket 0 00:06:36.444 EAL: Detected lcore 9 as core 0 on socket 0 00:06:36.444 EAL: Maximum logical cores by configuration: 128 00:06:36.444 EAL: Detected CPU lcores: 10 00:06:36.444 EAL: Detected NUMA nodes: 1 00:06:36.444 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:36.444 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:36.444 EAL: Checking presence of .so 'librte_eal.so' 00:06:36.444 EAL: Detected static linkage of DPDK 00:06:36.444 EAL: No shared files mode enabled, IPC will be disabled 00:06:36.444 EAL: Selected IOVA mode 'PA' 00:06:36.444 EAL: Probing VFIO support... 00:06:36.444 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:36.444 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:36.444 EAL: Ask a virtual area of 0x2e000 bytes 00:06:36.444 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:36.444 EAL: Setting up physically contiguous memory... 00:06:36.444 EAL: Setting maximum number of open files to 4096 00:06:36.444 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:36.444 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:36.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:36.444 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:36.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:36.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:36.444 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:36.444 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:36.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:36.444 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:36.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:36.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:36.444 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:36.444 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:36.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:36.444 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:36.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:36.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:36.444 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:36.444 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:36.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:36.444 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:36.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:36.445 EAL: Ask a virtual area of 0x400000000 bytes 00:06:36.445 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:36.445 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:36.445 EAL: Hugepages will be freed exactly as allocated. 
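In the EAL banner above, vtophys creates four memseg lists of 8192 x 2 MB hugepages on socket 0 and reserves a 16 GiB virtual area (0x400000000 bytes) for each before touching any hugepages. A sketch of the usual host preparation for these env tests, assuming SPDK's standard scripts/setup.sh helper (HUGEMEM is in megabytes):

# Reserve 4 GiB of 2 MB hugepages up front, then verify the kernel counters.
sudo HUGEMEM=4096 ./scripts/setup.sh
grep -E 'HugePages_(Total|Free)' /proc/meminfo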
00:06:36.445 EAL: No shared files mode enabled, IPC is disabled 00:06:36.445 EAL: No shared files mode enabled, IPC is disabled 00:06:36.445 EAL: TSC frequency is ~2100000 KHz 00:06:36.445 EAL: Main lcore 0 is ready (tid=7f9230b52180;cpuset=[0]) 00:06:36.445 EAL: Trying to obtain current memory policy. 00:06:36.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:36.445 EAL: Restoring previous memory policy: 0 00:06:36.445 EAL: request: mp_malloc_sync 00:06:36.445 EAL: No shared files mode enabled, IPC is disabled 00:06:36.445 EAL: Heap on socket 0 was expanded by 2MB 00:06:36.445 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:36.445 EAL: Mem event callback 'spdk:(nil)' registered 00:06:36.445 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:36.445 00:06:36.445 00:06:36.445 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.445 http://cunit.sourceforge.net/ 00:06:36.445 00:06:36.445 00:06:36.445 Suite: components_suite 00:06:37.039 Test: vtophys_malloc_test ...passed 00:06:37.039 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:37.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.039 EAL: Restoring previous memory policy: 0 00:06:37.039 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.039 EAL: request: mp_malloc_sync 00:06:37.039 EAL: No shared files mode enabled, IPC is disabled 00:06:37.039 EAL: Heap on socket 0 was expanded by 4MB 00:06:37.039 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.039 EAL: request: mp_malloc_sync 00:06:37.039 EAL: No shared files mode enabled, IPC is disabled 00:06:37.039 EAL: Heap on socket 0 was shrunk by 4MB 00:06:37.039 EAL: Trying to obtain current memory policy. 00:06:37.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.040 EAL: Restoring previous memory policy: 0 00:06:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.040 EAL: request: mp_malloc_sync 00:06:37.040 EAL: No shared files mode enabled, IPC is disabled 00:06:37.040 EAL: Heap on socket 0 was expanded by 6MB 00:06:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.040 EAL: request: mp_malloc_sync 00:06:37.040 EAL: No shared files mode enabled, IPC is disabled 00:06:37.040 EAL: Heap on socket 0 was shrunk by 6MB 00:06:37.040 EAL: Trying to obtain current memory policy. 00:06:37.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.040 EAL: Restoring previous memory policy: 0 00:06:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.040 EAL: request: mp_malloc_sync 00:06:37.040 EAL: No shared files mode enabled, IPC is disabled 00:06:37.040 EAL: Heap on socket 0 was expanded by 10MB 00:06:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.040 EAL: request: mp_malloc_sync 00:06:37.040 EAL: No shared files mode enabled, IPC is disabled 00:06:37.040 EAL: Heap on socket 0 was shrunk by 10MB 00:06:37.040 EAL: Trying to obtain current memory policy. 
00:06:37.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.298 EAL: Restoring previous memory policy: 0 00:06:37.298 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.298 EAL: request: mp_malloc_sync 00:06:37.298 EAL: No shared files mode enabled, IPC is disabled 00:06:37.298 EAL: Heap on socket 0 was expanded by 18MB 00:06:37.298 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.298 EAL: request: mp_malloc_sync 00:06:37.298 EAL: No shared files mode enabled, IPC is disabled 00:06:37.298 EAL: Heap on socket 0 was shrunk by 18MB 00:06:37.298 EAL: Trying to obtain current memory policy. 00:06:37.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.298 EAL: Restoring previous memory policy: 0 00:06:37.298 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.298 EAL: request: mp_malloc_sync 00:06:37.298 EAL: No shared files mode enabled, IPC is disabled 00:06:37.298 EAL: Heap on socket 0 was expanded by 34MB 00:06:37.298 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.298 EAL: request: mp_malloc_sync 00:06:37.298 EAL: No shared files mode enabled, IPC is disabled 00:06:37.298 EAL: Heap on socket 0 was shrunk by 34MB 00:06:37.298 EAL: Trying to obtain current memory policy. 00:06:37.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.298 EAL: Restoring previous memory policy: 0 00:06:37.298 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.298 EAL: request: mp_malloc_sync 00:06:37.298 EAL: No shared files mode enabled, IPC is disabled 00:06:37.298 EAL: Heap on socket 0 was expanded by 66MB 00:06:37.557 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.557 EAL: request: mp_malloc_sync 00:06:37.557 EAL: No shared files mode enabled, IPC is disabled 00:06:37.557 EAL: Heap on socket 0 was shrunk by 66MB 00:06:37.557 EAL: Trying to obtain current memory policy. 00:06:37.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.816 EAL: Restoring previous memory policy: 0 00:06:37.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.816 EAL: request: mp_malloc_sync 00:06:37.816 EAL: No shared files mode enabled, IPC is disabled 00:06:37.816 EAL: Heap on socket 0 was expanded by 130MB 00:06:37.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.075 EAL: request: mp_malloc_sync 00:06:38.075 EAL: No shared files mode enabled, IPC is disabled 00:06:38.075 EAL: Heap on socket 0 was shrunk by 130MB 00:06:38.075 EAL: Trying to obtain current memory policy. 00:06:38.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.334 EAL: Restoring previous memory policy: 0 00:06:38.334 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.334 EAL: request: mp_malloc_sync 00:06:38.334 EAL: No shared files mode enabled, IPC is disabled 00:06:38.334 EAL: Heap on socket 0 was expanded by 258MB 00:06:38.902 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.902 EAL: request: mp_malloc_sync 00:06:38.902 EAL: No shared files mode enabled, IPC is disabled 00:06:38.902 EAL: Heap on socket 0 was shrunk by 258MB 00:06:39.160 EAL: Trying to obtain current memory policy. 
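vtophys_spdk_malloc_test steps the allocation size up from 4 MB through 258 MB here, with 514 MB and 1026 MB still to come below; each malloc/free surfaces as a matched "expanded"/"shrunk" mem event pair synchronized over mp_malloc_sync. A hypothetical companion check, run from a second shell while the sizes ramp:

# Watch 2 MB hugepage consumption track the heap expand/shrink events.
watch -n1 cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages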
00:06:39.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.418 EAL: Restoring previous memory policy: 0 00:06:39.418 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.418 EAL: request: mp_malloc_sync 00:06:39.418 EAL: No shared files mode enabled, IPC is disabled 00:06:39.418 EAL: Heap on socket 0 was expanded by 514MB 00:06:40.353 EAL: Calling mem event callback 'spdk:(nil)' 00:06:40.353 EAL: request: mp_malloc_sync 00:06:40.353 EAL: No shared files mode enabled, IPC is disabled 00:06:40.353 EAL: Heap on socket 0 was shrunk by 514MB 00:06:41.289 EAL: Trying to obtain current memory policy. 00:06:41.289 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:41.857 EAL: Restoring previous memory policy: 0 00:06:41.857 EAL: Calling mem event callback 'spdk:(nil)' 00:06:41.857 EAL: request: mp_malloc_sync 00:06:41.857 EAL: No shared files mode enabled, IPC is disabled 00:06:41.857 EAL: Heap on socket 0 was expanded by 1026MB 00:06:43.762 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.762 EAL: request: mp_malloc_sync 00:06:43.762 EAL: No shared files mode enabled, IPC is disabled 00:06:43.762 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:45.671 passed 00:06:45.671 00:06:45.671 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.671 suites 1 1 n/a 0 0 00:06:45.671 tests 2 2 2 0 0 00:06:45.671 asserts 6713 6713 6713 0 n/a 00:06:45.671 00:06:45.671 Elapsed time = 8.860 seconds 00:06:45.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.672 EAL: request: mp_malloc_sync 00:06:45.672 EAL: No shared files mode enabled, IPC is disabled 00:06:45.672 EAL: Heap on socket 0 was shrunk by 2MB 00:06:45.672 EAL: No shared files mode enabled, IPC is disabled 00:06:45.672 EAL: No shared files mode enabled, IPC is disabled 00:06:45.672 EAL: No shared files mode enabled, IPC is disabled 00:06:45.672 ************************************ 00:06:45.672 END TEST env_vtophys 00:06:45.672 ************************************ 00:06:45.672 00:06:45.672 real 0m9.256s 00:06:45.672 user 0m7.581s 00:06:45.672 sys 0m1.462s 00:06:45.672 04:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.672 04:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 04:43:59 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:45.672 04:43:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.672 04:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.672 04:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 ************************************ 00:06:45.672 START TEST env_pci 00:06:45.672 ************************************ 00:06:45.672 04:43:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:45.672 00:06:45.672 00:06:45.672 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.672 http://cunit.sourceforge.net/ 00:06:45.672 00:06:45.672 00:06:45.672 Suite: pci 00:06:45.672 Test: pci_hook ...[2024-05-15 04:43:59.682157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 38909 has claimed it 00:06:45.672 passed 00:06:45.672 00:06:45.672 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.672 suites 1 1 n/a 0 0 00:06:45.672 tests 1 1 1 0 0 00:06:45.672 asserts 25 25 25 0 n/a 00:06:45.672 00:06:45.672 Elapsed time = 0.000 seconds 00:06:45.672 EAL: Cannot find device (10000:00:01.0) 00:06:45.672 EAL: Failed to attach device 
on primary process 00:06:45.672 00:06:45.672 real 0m0.088s 00:06:45.672 user 0m0.042s 00:06:45.672 sys 0m0.046s 00:06:45.672 04:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.672 04:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 ************************************ 00:06:45.672 END TEST env_pci 00:06:45.672 ************************************ 00:06:45.672 04:43:59 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:45.672 04:43:59 -- env/env.sh@15 -- # uname 00:06:45.672 04:43:59 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:45.672 04:43:59 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:45.672 04:43:59 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.672 04:43:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:45.672 04:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.672 04:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.672 ************************************ 00:06:45.672 START TEST env_dpdk_post_init 00:06:45.672 ************************************ 00:06:45.672 04:43:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.931 EAL: Detected CPU lcores: 10 00:06:45.931 EAL: Detected NUMA nodes: 1 00:06:45.931 EAL: Detected static linkage of DPDK 00:06:45.931 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.931 EAL: Selected IOVA mode 'PA' 00:06:45.931 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.931 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket 0) 00:06:45.931 Starting DPDK initialization... 00:06:45.931 Starting SPDK post initialization... 00:06:45.931 SPDK NVMe probe 00:06:45.931 Attaching to 0000:00:06.0 00:06:45.931 Attached to 0000:00:06.0 00:06:45.931 Cleaning up... 
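env_dpdk_post_init above probes the emulated controller at 0000:00:06.0 through the spdk_nvme driver and releases it again at "Cleaning up...". To inspect device-to-driver bindings around such a run (a sketch, assuming the standard helper script):

# Show which kernel or userspace driver each NVMe/DMA device is bound to.
sudo ./scripts/setup.sh status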
00:06:45.931 00:06:45.931 real 0m0.326s 00:06:45.931 user 0m0.047s 00:06:45.931 sys 0m0.082s 00:06:45.931 ************************************ 00:06:45.931 END TEST env_dpdk_post_init 00:06:45.931 ************************************ 00:06:45.931 04:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.931 04:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:45.931 04:44:00 -- env/env.sh@26 -- # uname 00:06:45.931 04:44:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.931 04:44:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.931 04:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.931 04:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.931 04:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.190 ************************************ 00:06:46.190 START TEST env_mem_callbacks 00:06:46.190 ************************************ 00:06:46.190 04:44:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.190 EAL: Detected CPU lcores: 10 00:06:46.190 EAL: Detected NUMA nodes: 1 00:06:46.190 EAL: Detected static linkage of DPDK 00:06:46.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.190 EAL: Selected IOVA mode 'PA' 00:06:46.190 00:06:46.190 00:06:46.190 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.190 http://cunit.sourceforge.net/ 00:06:46.190 00:06:46.190 00:06:46.190 Suite: memory 00:06:46.190 Test: test ... 00:06:46.190 register 0x200000200000 2097152 00:06:46.190 malloc 3145728 00:06:46.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.190 register 0x200000400000 4194304 00:06:46.190 buf 0x2000004fffc0 len 3145728 PASSED 00:06:46.190 malloc 64 00:06:46.190 buf 0x2000004ffec0 len 64 PASSED 00:06:46.190 malloc 4194304 00:06:46.190 register 0x200000800000 6291456 00:06:46.190 buf 0x2000009fffc0 len 4194304 PASSED 00:06:46.190 free 0x2000004fffc0 3145728 00:06:46.190 free 0x2000004ffec0 64 00:06:46.190 unregister 0x200000400000 4194304 PASSED 00:06:46.190 free 0x2000009fffc0 4194304 00:06:46.190 unregister 0x200000800000 6291456 PASSED 00:06:46.190 malloc 8388608 00:06:46.190 register 0x200000400000 10485760 00:06:46.190 buf 0x2000005fffc0 len 8388608 PASSED 00:06:46.190 free 0x2000005fffc0 8388608 00:06:46.448 unregister 0x200000400000 10485760 PASSED 00:06:46.448 passed 00:06:46.448 00:06:46.448 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.448 suites 1 1 n/a 0 0 00:06:46.448 tests 1 1 1 0 0 00:06:46.448 asserts 15 15 15 0 n/a 00:06:46.448 00:06:46.448 Elapsed time = 0.070 seconds 00:06:46.448 ************************************ 00:06:46.448 END TEST env_mem_callbacks 00:06:46.448 ************************************ 00:06:46.448 00:06:46.448 real 0m0.295s 00:06:46.448 user 0m0.116s 00:06:46.449 sys 0m0.079s 00:06:46.449 04:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.449 04:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.449 00:06:46.449 real 0m10.516s 00:06:46.449 user 0m8.100s 00:06:46.449 sys 0m1.902s 00:06:46.449 04:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.449 04:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.449 ************************************ 00:06:46.449 END TEST env 00:06:46.449 ************************************ 00:06:46.449 04:44:00 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
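Every suite in this log, including the rpc.sh run launched just above, goes through the run_test wrapper from test/common/autotest_common.sh (the @1077/@1083/@1104 frames in the xtrace output), which prints the START/END banners and the real/user/sys timings. A simplified sketch of the pattern; the real helper also manages xtrace and argument checks:

# Simplified: banner, time the suite, banner again.
run_test() { local name=$1; shift; echo "START TEST $name"; time "$@"; echo "END TEST $name"; }
run_test env ./test/env/env.sh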
00:06:46.449 04:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:46.449 04:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.449 04:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.449 ************************************ 00:06:46.449 START TEST rpc 00:06:46.449 ************************************ 00:06:46.449 04:44:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:46.449 * Looking for test storage... 00:06:46.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.449 04:44:00 -- rpc/rpc.sh@65 -- # spdk_pid=39048 00:06:46.449 04:44:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.449 04:44:00 -- rpc/rpc.sh@67 -- # waitforlisten 39048 00:06:46.449 04:44:00 -- common/autotest_common.sh@819 -- # '[' -z 39048 ']' 00:06:46.449 04:44:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.449 04:44:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.449 04:44:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.449 04:44:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.449 04:44:00 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:46.449 04:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.707 [2024-05-15 04:44:00.809438] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:46.707 [2024-05-15 04:44:00.809611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39048 ] 00:06:46.966 [2024-05-15 04:44:00.989286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.224 [2024-05-15 04:44:01.223174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.224 [2024-05-15 04:44:01.223390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:47.224 [2024-05-15 04:44:01.223423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 39048' to capture a snapshot of events at runtime. 00:06:47.224 [2024-05-15 04:44:01.223443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid39048 for offline analysis/debug. 
00:06:47.224 [2024-05-15 04:44:01.223510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.161 04:44:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.161 04:44:02 -- common/autotest_common.sh@852 -- # return 0 00:06:48.161 04:44:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:48.161 04:44:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:48.161 04:44:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:48.161 04:44:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:48.161 04:44:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.161 04:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.161 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.161 ************************************ 00:06:48.161 START TEST rpc_integrity 00:06:48.161 ************************************ 00:06:48.161 04:44:02 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:48.161 04:44:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:48.161 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.161 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.161 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.161 04:44:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:48.161 04:44:02 -- rpc/rpc.sh@13 -- # jq length 00:06:48.420 04:44:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:48.420 04:44:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:48.420 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.420 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.420 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.420 04:44:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:48.420 04:44:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:48.420 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.420 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.420 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.420 04:44:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:48.420 { 00:06:48.420 "name": "Malloc0", 00:06:48.420 "aliases": [ 00:06:48.420 "a606c178-45fd-4366-98e7-1570794e2bd1" 00:06:48.420 ], 00:06:48.420 "product_name": "Malloc disk", 00:06:48.420 "block_size": 512, 00:06:48.420 "num_blocks": 16384, 00:06:48.420 "uuid": "a606c178-45fd-4366-98e7-1570794e2bd1", 00:06:48.420 "assigned_rate_limits": { 00:06:48.420 "rw_ios_per_sec": 0, 00:06:48.420 "rw_mbytes_per_sec": 0, 00:06:48.420 "r_mbytes_per_sec": 0, 00:06:48.420 "w_mbytes_per_sec": 0 00:06:48.420 }, 00:06:48.420 "claimed": false, 00:06:48.420 "zoned": false, 00:06:48.420 "supported_io_types": { 00:06:48.420 "read": true, 00:06:48.420 "write": true, 00:06:48.420 "unmap": true, 00:06:48.420 "write_zeroes": true, 00:06:48.420 "flush": true, 00:06:48.420 "reset": true, 00:06:48.420 "compare": false, 00:06:48.420 "compare_and_write": false, 00:06:48.420 "abort": true, 00:06:48.420 "nvme_admin": false, 00:06:48.420 "nvme_io": false 00:06:48.420 }, 00:06:48.420 "memory_domains": [ 00:06:48.420 { 00:06:48.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.420 
"dma_device_type": 2 00:06:48.420 } 00:06:48.420 ], 00:06:48.420 "driver_specific": {} 00:06:48.420 } 00:06:48.420 ]' 00:06:48.420 04:44:02 -- rpc/rpc.sh@17 -- # jq length 00:06:48.420 04:44:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:48.420 04:44:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:48.421 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.421 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 [2024-05-15 04:44:02.503846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:48.421 [2024-05-15 04:44:02.503912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.421 [2024-05-15 04:44:02.503970] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028280 00:06:48.421 [2024-05-15 04:44:02.503991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.421 Passthru0 00:06:48.421 [2024-05-15 04:44:02.505566] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.421 [2024-05-15 04:44:02.505619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:48.421 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.421 04:44:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:48.421 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.421 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.421 04:44:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:48.421 { 00:06:48.421 "name": "Malloc0", 00:06:48.421 "aliases": [ 00:06:48.421 "a606c178-45fd-4366-98e7-1570794e2bd1" 00:06:48.421 ], 00:06:48.421 "product_name": "Malloc disk", 00:06:48.421 "block_size": 512, 00:06:48.421 "num_blocks": 16384, 00:06:48.421 "uuid": "a606c178-45fd-4366-98e7-1570794e2bd1", 00:06:48.421 "assigned_rate_limits": { 00:06:48.421 "rw_ios_per_sec": 0, 00:06:48.421 "rw_mbytes_per_sec": 0, 00:06:48.421 "r_mbytes_per_sec": 0, 00:06:48.421 "w_mbytes_per_sec": 0 00:06:48.421 }, 00:06:48.421 "claimed": true, 00:06:48.421 "claim_type": "exclusive_write", 00:06:48.421 "zoned": false, 00:06:48.421 "supported_io_types": { 00:06:48.421 "read": true, 00:06:48.421 "write": true, 00:06:48.421 "unmap": true, 00:06:48.421 "write_zeroes": true, 00:06:48.421 "flush": true, 00:06:48.421 "reset": true, 00:06:48.421 "compare": false, 00:06:48.421 "compare_and_write": false, 00:06:48.421 "abort": true, 00:06:48.421 "nvme_admin": false, 00:06:48.421 "nvme_io": false 00:06:48.421 }, 00:06:48.421 "memory_domains": [ 00:06:48.421 { 00:06:48.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.421 "dma_device_type": 2 00:06:48.421 } 00:06:48.421 ], 00:06:48.421 "driver_specific": {} 00:06:48.421 }, 00:06:48.421 { 00:06:48.421 "name": "Passthru0", 00:06:48.421 "aliases": [ 00:06:48.421 "8b7025b6-c792-50e6-83a8-dde1daab699e" 00:06:48.421 ], 00:06:48.421 "product_name": "passthru", 00:06:48.421 "block_size": 512, 00:06:48.421 "num_blocks": 16384, 00:06:48.421 "uuid": "8b7025b6-c792-50e6-83a8-dde1daab699e", 00:06:48.421 "assigned_rate_limits": { 00:06:48.421 "rw_ios_per_sec": 0, 00:06:48.421 "rw_mbytes_per_sec": 0, 00:06:48.421 "r_mbytes_per_sec": 0, 00:06:48.421 "w_mbytes_per_sec": 0 00:06:48.421 }, 00:06:48.421 "claimed": false, 00:06:48.421 "zoned": false, 00:06:48.421 "supported_io_types": { 00:06:48.421 "read": true, 00:06:48.421 "write": true, 00:06:48.421 "unmap": true, 00:06:48.421 
"write_zeroes": true, 00:06:48.421 "flush": true, 00:06:48.421 "reset": true, 00:06:48.421 "compare": false, 00:06:48.421 "compare_and_write": false, 00:06:48.421 "abort": true, 00:06:48.421 "nvme_admin": false, 00:06:48.421 "nvme_io": false 00:06:48.421 }, 00:06:48.421 "memory_domains": [ 00:06:48.421 { 00:06:48.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.421 "dma_device_type": 2 00:06:48.421 } 00:06:48.421 ], 00:06:48.421 "driver_specific": { 00:06:48.421 "passthru": { 00:06:48.421 "name": "Passthru0", 00:06:48.421 "base_bdev_name": "Malloc0" 00:06:48.421 } 00:06:48.421 } 00:06:48.421 } 00:06:48.421 ]' 00:06:48.421 04:44:02 -- rpc/rpc.sh@21 -- # jq length 00:06:48.421 04:44:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:48.421 04:44:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:48.421 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.421 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.421 04:44:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:48.421 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.421 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.421 04:44:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:48.421 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.421 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.421 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.421 04:44:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:48.421 04:44:02 -- rpc/rpc.sh@26 -- # jq length 00:06:48.681 ************************************ 00:06:48.681 END TEST rpc_integrity 00:06:48.681 ************************************ 00:06:48.681 04:44:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:48.681 00:06:48.681 real 0m0.352s 00:06:48.681 user 0m0.208s 00:06:48.681 sys 0m0.043s 00:06:48.681 04:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.681 04:44:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:48.681 04:44:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.681 04:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.681 ************************************ 00:06:48.681 START TEST rpc_plugins 00:06:48.681 ************************************ 00:06:48.681 04:44:02 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:06:48.681 04:44:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:48.681 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.681 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.681 04:44:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:48.681 04:44:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:48.681 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.681 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.681 04:44:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:48.681 { 00:06:48.681 "name": "Malloc1", 00:06:48.681 "aliases": [ 00:06:48.681 "d35f8068-2da8-408b-a79e-f5238ade39c5" 00:06:48.681 ], 00:06:48.681 "product_name": "Malloc disk", 00:06:48.681 
"block_size": 4096, 00:06:48.681 "num_blocks": 256, 00:06:48.681 "uuid": "d35f8068-2da8-408b-a79e-f5238ade39c5", 00:06:48.681 "assigned_rate_limits": { 00:06:48.681 "rw_ios_per_sec": 0, 00:06:48.681 "rw_mbytes_per_sec": 0, 00:06:48.681 "r_mbytes_per_sec": 0, 00:06:48.681 "w_mbytes_per_sec": 0 00:06:48.681 }, 00:06:48.681 "claimed": false, 00:06:48.681 "zoned": false, 00:06:48.681 "supported_io_types": { 00:06:48.681 "read": true, 00:06:48.681 "write": true, 00:06:48.681 "unmap": true, 00:06:48.681 "write_zeroes": true, 00:06:48.681 "flush": true, 00:06:48.681 "reset": true, 00:06:48.681 "compare": false, 00:06:48.681 "compare_and_write": false, 00:06:48.681 "abort": true, 00:06:48.681 "nvme_admin": false, 00:06:48.681 "nvme_io": false 00:06:48.681 }, 00:06:48.681 "memory_domains": [ 00:06:48.681 { 00:06:48.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.681 "dma_device_type": 2 00:06:48.681 } 00:06:48.681 ], 00:06:48.681 "driver_specific": {} 00:06:48.681 } 00:06:48.681 ]' 00:06:48.681 04:44:02 -- rpc/rpc.sh@32 -- # jq length 00:06:48.681 04:44:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:48.681 04:44:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:48.681 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.681 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.681 04:44:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:48.681 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.681 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.681 04:44:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:48.681 04:44:02 -- rpc/rpc.sh@36 -- # jq length 00:06:48.681 ************************************ 00:06:48.681 END TEST rpc_plugins 00:06:48.681 ************************************ 00:06:48.681 04:44:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:48.681 00:06:48.681 real 0m0.158s 00:06:48.681 user 0m0.110s 00:06:48.681 sys 0m0.013s 00:06:48.681 04:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.681 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 04:44:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:48.941 04:44:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.941 04:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.941 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 ************************************ 00:06:48.941 START TEST rpc_trace_cmd_test 00:06:48.941 ************************************ 00:06:48.941 04:44:02 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:06:48.941 04:44:02 -- rpc/rpc.sh@40 -- # local info 00:06:48.941 04:44:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:48.941 04:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.941 04:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 04:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.941 04:44:02 -- rpc/rpc.sh@42 -- # info='{ 00:06:48.941 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid39048", 00:06:48.941 "tpoint_group_mask": "0x8", 00:06:48.941 "iscsi_conn": { 00:06:48.941 "mask": "0x2", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "scsi": { 00:06:48.941 "mask": "0x4", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "bdev": { 00:06:48.941 "mask": "0x8", 00:06:48.941 "tpoint_mask": 
"0xffffffffffffffff" 00:06:48.941 }, 00:06:48.941 "nvmf_rdma": { 00:06:48.941 "mask": "0x10", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "nvmf_tcp": { 00:06:48.941 "mask": "0x20", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "ftl": { 00:06:48.941 "mask": "0x40", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "blobfs": { 00:06:48.941 "mask": "0x80", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "dsa": { 00:06:48.941 "mask": "0x200", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "thread": { 00:06:48.941 "mask": "0x400", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "nvme_pcie": { 00:06:48.941 "mask": "0x800", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "iaa": { 00:06:48.941 "mask": "0x1000", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "nvme_tcp": { 00:06:48.941 "mask": "0x2000", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 }, 00:06:48.941 "bdev_nvme": { 00:06:48.941 "mask": "0x4000", 00:06:48.941 "tpoint_mask": "0x0" 00:06:48.941 } 00:06:48.941 }' 00:06:48.941 04:44:02 -- rpc/rpc.sh@43 -- # jq length 00:06:48.941 04:44:03 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:48.941 04:44:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:48.941 04:44:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:48.941 04:44:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:48.941 04:44:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:48.941 04:44:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:48.941 04:44:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:48.941 04:44:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:49.199 04:44:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:49.199 00:06:49.199 real 0m0.282s 00:06:49.199 user 0m0.245s 00:06:49.199 sys 0m0.030s 00:06:49.199 ************************************ 00:06:49.199 END TEST rpc_trace_cmd_test 00:06:49.199 ************************************ 00:06:49.199 04:44:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.199 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.199 04:44:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:49.199 04:44:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:49.199 04:44:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:49.199 04:44:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.200 04:44:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.200 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.200 ************************************ 00:06:49.200 START TEST rpc_daemon_integrity 00:06:49.200 ************************************ 00:06:49.200 04:44:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:49.200 04:44:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:49.200 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.200 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.200 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.200 04:44:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:49.200 04:44:03 -- rpc/rpc.sh@13 -- # jq length 00:06:49.200 04:44:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:49.200 04:44:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:49.200 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.200 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.200 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.200 04:44:03 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:49.200 04:44:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:49.200 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.200 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.200 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.200 04:44:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:49.200 { 00:06:49.200 "name": "Malloc2", 00:06:49.200 "aliases": [ 00:06:49.200 "d0c44200-27bd-4c71-9801-847ebfc05390" 00:06:49.200 ], 00:06:49.200 "product_name": "Malloc disk", 00:06:49.200 "block_size": 512, 00:06:49.200 "num_blocks": 16384, 00:06:49.200 "uuid": "d0c44200-27bd-4c71-9801-847ebfc05390", 00:06:49.200 "assigned_rate_limits": { 00:06:49.200 "rw_ios_per_sec": 0, 00:06:49.200 "rw_mbytes_per_sec": 0, 00:06:49.200 "r_mbytes_per_sec": 0, 00:06:49.200 "w_mbytes_per_sec": 0 00:06:49.200 }, 00:06:49.200 "claimed": false, 00:06:49.200 "zoned": false, 00:06:49.200 "supported_io_types": { 00:06:49.200 "read": true, 00:06:49.200 "write": true, 00:06:49.200 "unmap": true, 00:06:49.200 "write_zeroes": true, 00:06:49.200 "flush": true, 00:06:49.200 "reset": true, 00:06:49.200 "compare": false, 00:06:49.200 "compare_and_write": false, 00:06:49.200 "abort": true, 00:06:49.200 "nvme_admin": false, 00:06:49.200 "nvme_io": false 00:06:49.200 }, 00:06:49.200 "memory_domains": [ 00:06:49.200 { 00:06:49.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.200 "dma_device_type": 2 00:06:49.200 } 00:06:49.200 ], 00:06:49.200 "driver_specific": {} 00:06:49.200 } 00:06:49.200 ]' 00:06:49.200 04:44:03 -- rpc/rpc.sh@17 -- # jq length 00:06:49.459 04:44:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:49.459 04:44:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:49.459 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.459 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.459 [2024-05-15 04:44:03.435745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:49.459 [2024-05-15 04:44:03.435800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.459 [2024-05-15 04:44:03.435850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a680 00:06:49.459 [2024-05-15 04:44:03.435876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.459 Passthru0 00:06:49.459 [2024-05-15 04:44:03.437815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.459 [2024-05-15 04:44:03.437879] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:49.459 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.459 04:44:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:49.459 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.459 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.459 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.459 04:44:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:49.459 { 00:06:49.459 "name": "Malloc2", 00:06:49.459 "aliases": [ 00:06:49.459 "d0c44200-27bd-4c71-9801-847ebfc05390" 00:06:49.459 ], 00:06:49.459 "product_name": "Malloc disk", 00:06:49.459 "block_size": 512, 00:06:49.459 "num_blocks": 16384, 00:06:49.459 "uuid": "d0c44200-27bd-4c71-9801-847ebfc05390", 00:06:49.459 "assigned_rate_limits": { 00:06:49.459 "rw_ios_per_sec": 0, 00:06:49.459 "rw_mbytes_per_sec": 0, 00:06:49.459 "r_mbytes_per_sec": 0, 00:06:49.459 
"w_mbytes_per_sec": 0 00:06:49.459 }, 00:06:49.459 "claimed": true, 00:06:49.459 "claim_type": "exclusive_write", 00:06:49.459 "zoned": false, 00:06:49.459 "supported_io_types": { 00:06:49.459 "read": true, 00:06:49.459 "write": true, 00:06:49.459 "unmap": true, 00:06:49.459 "write_zeroes": true, 00:06:49.459 "flush": true, 00:06:49.459 "reset": true, 00:06:49.459 "compare": false, 00:06:49.459 "compare_and_write": false, 00:06:49.459 "abort": true, 00:06:49.459 "nvme_admin": false, 00:06:49.459 "nvme_io": false 00:06:49.459 }, 00:06:49.459 "memory_domains": [ 00:06:49.459 { 00:06:49.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.459 "dma_device_type": 2 00:06:49.459 } 00:06:49.459 ], 00:06:49.459 "driver_specific": {} 00:06:49.459 }, 00:06:49.459 { 00:06:49.459 "name": "Passthru0", 00:06:49.459 "aliases": [ 00:06:49.459 "e26475f4-db3a-5b7a-8270-bfa1c2f8e3f7" 00:06:49.459 ], 00:06:49.459 "product_name": "passthru", 00:06:49.459 "block_size": 512, 00:06:49.459 "num_blocks": 16384, 00:06:49.459 "uuid": "e26475f4-db3a-5b7a-8270-bfa1c2f8e3f7", 00:06:49.459 "assigned_rate_limits": { 00:06:49.459 "rw_ios_per_sec": 0, 00:06:49.459 "rw_mbytes_per_sec": 0, 00:06:49.459 "r_mbytes_per_sec": 0, 00:06:49.459 "w_mbytes_per_sec": 0 00:06:49.459 }, 00:06:49.459 "claimed": false, 00:06:49.459 "zoned": false, 00:06:49.459 "supported_io_types": { 00:06:49.459 "read": true, 00:06:49.459 "write": true, 00:06:49.459 "unmap": true, 00:06:49.459 "write_zeroes": true, 00:06:49.459 "flush": true, 00:06:49.459 "reset": true, 00:06:49.459 "compare": false, 00:06:49.459 "compare_and_write": false, 00:06:49.459 "abort": true, 00:06:49.459 "nvme_admin": false, 00:06:49.459 "nvme_io": false 00:06:49.459 }, 00:06:49.459 "memory_domains": [ 00:06:49.459 { 00:06:49.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.459 "dma_device_type": 2 00:06:49.459 } 00:06:49.459 ], 00:06:49.459 "driver_specific": { 00:06:49.459 "passthru": { 00:06:49.459 "name": "Passthru0", 00:06:49.459 "base_bdev_name": "Malloc2" 00:06:49.459 } 00:06:49.459 } 00:06:49.459 } 00:06:49.459 ]' 00:06:49.459 04:44:03 -- rpc/rpc.sh@21 -- # jq length 00:06:49.459 04:44:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:49.459 04:44:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:49.459 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.459 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.459 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.459 04:44:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:49.459 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.459 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.459 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.459 04:44:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.459 04:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.459 04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.459 04:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.459 04:44:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.459 04:44:03 -- rpc/rpc.sh@26 -- # jq length 00:06:49.459 ************************************ 00:06:49.459 END TEST rpc_daemon_integrity 00:06:49.459 ************************************ 00:06:49.459 04:44:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:49.459 00:06:49.459 real 0m0.345s 00:06:49.459 user 0m0.216s 00:06:49.459 sys 0m0.040s 00:06:49.459 04:44:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.459 
04:44:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.459 04:44:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:49.459 04:44:03 -- rpc/rpc.sh@84 -- # killprocess 39048 00:06:49.459 04:44:03 -- common/autotest_common.sh@926 -- # '[' -z 39048 ']' 00:06:49.459 04:44:03 -- common/autotest_common.sh@930 -- # kill -0 39048 00:06:49.459 04:44:03 -- common/autotest_common.sh@931 -- # uname 00:06:49.459 04:44:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.459 04:44:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 39048 00:06:49.459 killing process with pid 39048 00:06:49.459 04:44:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.459 04:44:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.459 04:44:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 39048' 00:06:49.459 04:44:03 -- common/autotest_common.sh@945 -- # kill 39048 00:06:49.459 04:44:03 -- common/autotest_common.sh@950 -- # wait 39048 00:06:52.745 ************************************ 00:06:52.745 END TEST rpc 00:06:52.745 ************************************ 00:06:52.745 00:06:52.745 real 0m5.728s 00:06:52.745 user 0m6.341s 00:06:52.745 sys 0m0.965s 00:06:52.745 04:44:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.745 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.745 04:44:06 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:52.745 04:44:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.745 04:44:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.745 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.745 ************************************ 00:06:52.745 START TEST rpc_client 00:06:52.745 ************************************ 00:06:52.745 04:44:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:52.745 * Looking for test storage... 
00:06:52.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:52.745 04:44:06 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:52.745 OK 00:06:52.745 04:44:06 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:52.745 ************************************ 00:06:52.745 END TEST rpc_client 00:06:52.745 ************************************ 00:06:52.745 00:06:52.745 real 0m0.243s 00:06:52.745 user 0m0.063s 00:06:52.745 sys 0m0.096s 00:06:52.745 04:44:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.745 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.745 04:44:06 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:52.745 04:44:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.745 04:44:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.745 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.745 ************************************ 00:06:52.745 START TEST json_config 00:06:52.745 ************************************ 00:06:52.745 04:44:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:52.745 04:44:06 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:52.745 04:44:06 -- nvmf/common.sh@7 -- # uname -s 00:06:52.745 04:44:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.745 04:44:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.745 04:44:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.745 04:44:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.745 04:44:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.745 04:44:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.745 04:44:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.745 04:44:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.745 04:44:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.745 04:44:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.745 INFO: JSON configuration test init 00:06:52.745 04:44:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d34e5db0-730e-4347-b982-ebcdd012eaa7 00:06:52.745 04:44:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=d34e5db0-730e-4347-b982-ebcdd012eaa7 00:06:52.745 04:44:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.745 04:44:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.745 04:44:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:52.745 04:44:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.745 04:44:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.745 04:44:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.745 04:44:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.745 04:44:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:52.745 04:44:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:52.745 04:44:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:52.745 04:44:06 -- paths/export.sh@5 -- # export PATH 00:06:52.745 04:44:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:52.745 04:44:06 -- nvmf/common.sh@46 -- # : 0 00:06:52.745 04:44:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:52.745 04:44:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:52.745 04:44:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:52.745 04:44:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.745 04:44:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.745 04:44:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:52.745 04:44:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:52.745 04:44:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:52.745 04:44:06 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:52.745 04:44:06 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:52.745 04:44:06 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:52.745 04:44:06 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:52.745 04:44:06 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:06:52.745 04:44:06 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:52.745 04:44:06 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:06:52.745 04:44:06 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:52.745 04:44:06 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:06:52.745 04:44:06 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:52.745 04:44:06 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:06:52.745 04:44:06 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:52.745 04:44:06 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:52.745 04:44:06 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:52.745 04:44:06 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:52.745 04:44:06 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:52.745 04:44:06 -- json_config/json_config.sh@315 -- # timing_enter 
json_config_test_init 00:06:52.745 04:44:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:52.745 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.745 04:44:06 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:52.745 04:44:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:52.745 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.745 04:44:06 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:52.745 04:44:06 -- json_config/json_config.sh@98 -- # local app=target 00:06:52.745 04:44:06 -- json_config/json_config.sh@99 -- # shift 00:06:52.745 04:44:06 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:52.745 04:44:06 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:52.745 04:44:06 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:52.745 04:44:06 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:52.746 04:44:06 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:52.746 04:44:06 -- json_config/json_config.sh@111 -- # app_pid[$app]=39379 00:06:52.746 Waiting for target to run... 00:06:52.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:52.746 04:44:06 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:52.746 04:44:06 -- json_config/json_config.sh@114 -- # waitforlisten 39379 /var/tmp/spdk_tgt.sock 00:06:52.746 04:44:06 -- common/autotest_common.sh@819 -- # '[' -z 39379 ']' 00:06:52.746 04:44:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:52.746 04:44:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.746 04:44:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:52.746 04:44:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.746 04:44:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.746 04:44:06 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:52.746 [2024-05-15 04:44:06.867560] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
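spdk_tgt is launched here with --wait-for-rpc, so it stops after binding /var/tmp/spdk_tgt.sock and waits to be driven over RPC; waitforlisten then blocks until the socket answers. A sketch of that startup handshake — the rpc_get_methods probe is an assumption, any cheap RPC that succeeds once the listener is up would serve:

build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
app_pid=$!
echo 'Waiting for target to run...'

# Poll the UNIX domain socket until the RPC server responds.
for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done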
00:06:52.746 [2024-05-15 04:44:06.867965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39379 ] 00:06:53.341 [2024-05-15 04:44:07.453631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.601 [2024-05-15 04:44:07.666668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.601 [2024-05-15 04:44:07.667095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.601 00:06:53.601 04:44:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.601 04:44:07 -- common/autotest_common.sh@852 -- # return 0 00:06:53.601 04:44:07 -- json_config/json_config.sh@115 -- # echo '' 00:06:53.601 04:44:07 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:53.601 04:44:07 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:53.601 04:44:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.601 04:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.601 04:44:07 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:53.601 04:44:07 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:53.601 04:44:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:53.601 04:44:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.601 04:44:07 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:53.601 04:44:07 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:53.601 04:44:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:54.539 04:44:08 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:54.539 04:44:08 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:54.539 04:44:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:54.539 04:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.539 04:44:08 -- json_config/json_config.sh@48 -- # local ret=0 00:06:54.539 04:44:08 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:06:54.539 04:44:08 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:54.539 04:44:08 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:06:54.539 04:44:08 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:54.539 04:44:08 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:54.539 04:44:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:54.798 04:44:08 -- json_config/json_config.sh@51 -- # local get_types 00:06:54.798 04:44:08 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:54.798 04:44:08 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:54.798 04:44:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:54.798 04:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.798 04:44:08 -- json_config/json_config.sh@58 -- # return 0 00:06:54.798 04:44:08 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:06:54.798 04:44:08 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:06:54.798 04:44:08 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:06:54.798 04:44:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:54.798 04:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.798 04:44:08 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:06:54.798 04:44:08 -- json_config/json_config.sh@160 -- # local expected_notifications 00:06:54.798 04:44:08 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:06:54.798 04:44:08 -- json_config/json_config.sh@164 -- # get_notifications 00:06:54.798 04:44:08 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:06:54.798 04:44:08 -- json_config/json_config.sh@64 -- # IFS=: 00:06:54.798 04:44:08 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:54.798 04:44:08 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:54.798 04:44:08 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:06:54.798 04:44:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:55.058 04:44:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:06:55.058 04:44:09 -- json_config/json_config.sh@64 -- # IFS=: 00:06:55.058 04:44:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:55.058 04:44:09 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:06:55.058 04:44:09 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:06:55.058 04:44:09 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:55.058 04:44:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:55.317 Nvme0n1p0 Nvme0n1p1 00:06:55.317 04:44:09 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:55.317 04:44:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:55.317 [2024-05-15 04:44:09.475612] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:55.317 [2024-05-15 04:44:09.475680] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:55.317 00:06:55.317 04:44:09 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:55.317 04:44:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:55.576 Malloc3 00:06:55.576 04:44:09 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:55.576 04:44:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:55.835 [2024-05-15 04:44:09.848658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:55.835 [2024-05-15 04:44:09.848889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.835 [2024-05-15 04:44:09.848941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036080 00:06:55.835 [2024-05-15 04:44:09.848968] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:55.835 [2024-05-15 04:44:09.850521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.835 [2024-05-15 04:44:09.850572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:55.835 PTBdevFromMalloc3 00:06:55.835 04:44:09 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:55.835 04:44:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:56.094 Null0 00:06:56.094 04:44:10 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:56.094 04:44:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:56.094 Malloc0 00:06:56.094 04:44:10 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:56.094 04:44:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:56.353 Malloc1 00:06:56.353 04:44:10 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:56.353 04:44:10 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:56.612 102400+0 records in 00:06:56.612 102400+0 records out 00:06:56.612 104857600 bytes (105 MB) copied, 0.397464 s, 264 MB/s 00:06:56.612 04:44:10 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:56.612 04:44:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:56.871 aio_disk 00:06:56.871 04:44:11 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:56.871 04:44:11 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:56.871 04:44:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:57.130 c8467749-eaa7-42c2-b14c-6c1a69dae519 00:06:57.130 04:44:11 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:57.130 04:44:11 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:57.130 04:44:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:57.389 04:44:11 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:57.389 04:44:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:57.647 04:44:11 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:57.647 04:44:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:57.647 04:44:11 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:57.648 04:44:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:57.907 04:44:11 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:06:57.907 04:44:11 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:06:57.907 04:44:11 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:abcaa533-51d4-4d2b-8ca9-d5243232b6df bdev_register:7eb7d3b5-7253-404f-b08b-494e50f51632 bdev_register:e22824a5-c0d5-4377-8f92-2d3c428cc719 bdev_register:d9cde6e9-2f1c-4bd8-80ce-e83a27ac2286 00:06:57.907 04:44:11 -- json_config/json_config.sh@70 -- # local events_to_check 00:06:57.907 04:44:11 -- json_config/json_config.sh@71 -- # local recorded_events 00:06:57.907 04:44:11 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:57.907 04:44:11 -- json_config/json_config.sh@74 -- # sort 00:06:57.907 04:44:11 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:abcaa533-51d4-4d2b-8ca9-d5243232b6df bdev_register:7eb7d3b5-7253-404f-b08b-494e50f51632 bdev_register:e22824a5-c0d5-4377-8f92-2d3c428cc719 bdev_register:d9cde6e9-2f1c-4bd8-80ce-e83a27ac2286 00:06:57.907 04:44:11 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:06:57.907 04:44:11 -- json_config/json_config.sh@75 -- # get_notifications 00:06:57.907 04:44:11 -- json_config/json_config.sh@75 -- # sort 00:06:57.907 04:44:11 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:06:57.907 04:44:11 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:11 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:11 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:06:57.907 04:44:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:57.907 04:44:11 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:abcaa533-51d4-4d2b-8ca9-d5243232b6df 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:7eb7d3b5-7253-404f-b08b-494e50f51632 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:e22824a5-c0d5-4377-8f92-2d3c428cc719 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@65 -- # echo bdev_register:d9cde6e9-2f1c-4bd8-80ce-e83a27ac2286 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # IFS=: 00:06:57.907 04:44:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:57.907 04:44:12 -- json_config/json_config.sh@77 
-- # [[ bdev_register:7eb7d3b5-7253-404f-b08b-494e50f51632 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:abcaa533-51d4-4d2b-8ca9-d5243232b6df bdev_register:aio_disk bdev_register:d9cde6e9-2f1c-4bd8-80ce-e83a27ac2286 bdev_register:e22824a5-c0d5-4377-8f92-2d3c428cc719 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\e\b\7\d\3\b\5\-\7\2\5\3\-\4\0\4\f\-\b\0\8\b\-\4\9\4\e\5\0\f\5\1\6\3\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\b\c\a\a\5\3\3\-\5\1\d\4\-\4\d\2\b\-\8\c\a\9\-\d\5\2\4\3\2\3\2\b\6\d\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\9\c\d\e\6\e\9\-\2\f\1\c\-\4\b\d\8\-\8\0\c\e\-\e\8\3\a\2\7\a\c\2\2\8\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\2\2\8\2\4\a\5\-\c\0\d\5\-\4\3\7\7\-\8\f\9\2\-\2\d\3\c\4\2\8\c\c\7\1\9 ]] 00:06:57.907 04:44:12 -- json_config/json_config.sh@89 -- # cat 00:06:57.907 04:44:12 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:7eb7d3b5-7253-404f-b08b-494e50f51632 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:abcaa533-51d4-4d2b-8ca9-d5243232b6df bdev_register:aio_disk bdev_register:d9cde6e9-2f1c-4bd8-80ce-e83a27ac2286 bdev_register:e22824a5-c0d5-4377-8f92-2d3c428cc719 00:06:57.907 Expected events matched: 00:06:57.907 bdev_register:7eb7d3b5-7253-404f-b08b-494e50f51632 00:06:57.907 bdev_register:Malloc0 00:06:57.907 bdev_register:Malloc0p0 00:06:57.907 bdev_register:Malloc0p1 00:06:57.907 bdev_register:Malloc0p2 00:06:57.907 bdev_register:Malloc1 00:06:57.907 bdev_register:Malloc3 00:06:57.907 bdev_register:Null0 00:06:57.907 bdev_register:Nvme0n1 00:06:57.907 bdev_register:Nvme0n1p0 00:06:57.907 bdev_register:Nvme0n1p1 00:06:57.907 bdev_register:PTBdevFromMalloc3 00:06:57.907 bdev_register:abcaa533-51d4-4d2b-8ca9-d5243232b6df 00:06:57.907 bdev_register:aio_disk 00:06:57.908 bdev_register:d9cde6e9-2f1c-4bd8-80ce-e83a27ac2286 00:06:57.908 bdev_register:e22824a5-c0d5-4377-8f92-2d3c428cc719 00:06:57.908 04:44:12 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:06:57.908 04:44:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:57.908 04:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.182 04:44:12 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:58.182 04:44:12 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:58.182 04:44:12 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:06:58.182 04:44:12 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:58.182 04:44:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:58.182 04:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.182 
04:44:12 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:58.182 04:44:12 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.182 04:44:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.182 MallocBdevForConfigChangeCheck 00:06:58.182 04:44:12 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:58.182 04:44:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:58.182 04:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.448 04:44:12 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:58.448 04:44:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.706 INFO: shutting down applications... 00:06:58.706 04:44:12 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:58.706 04:44:12 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:58.706 04:44:12 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:58.706 04:44:12 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:58.706 04:44:12 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:58.706 [2024-05-15 04:44:12.877986] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:58.964 Calling clear_vhost_scsi_subsystem 00:06:58.964 Calling clear_iscsi_subsystem 00:06:58.964 Calling clear_vhost_blk_subsystem 00:06:58.964 Calling clear_nbd_subsystem 00:06:58.964 Calling clear_nvmf_subsystem 00:06:58.964 Calling clear_bdev_subsystem 00:06:58.964 Calling clear_accel_subsystem 00:06:58.964 Calling clear_iobuf_subsystem 00:06:58.964 Calling clear_sock_subsystem 00:06:58.964 Calling clear_vmd_subsystem 00:06:58.964 Calling clear_scheduler_subsystem 00:06:58.964 04:44:13 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:58.965 04:44:13 -- json_config/json_config.sh@396 -- # count=100 00:06:58.965 04:44:13 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:58.965 04:44:13 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.965 04:44:13 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:58.965 04:44:13 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:59.223 04:44:13 -- json_config/json_config.sh@398 -- # break 00:06:59.223 04:44:13 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:59.223 04:44:13 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:59.223 04:44:13 -- json_config/json_config.sh@120 -- # local app=target 00:06:59.223 04:44:13 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:59.223 04:44:13 -- json_config/json_config.sh@124 -- # [[ -n 39379 ]] 00:06:59.223 04:44:13 -- json_config/json_config.sh@127 -- # kill -SIGINT 39379 00:06:59.223 04:44:13 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:59.223 04:44:13 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:59.223 04:44:13 -- 
json_config/json_config.sh@130 -- # kill -0 39379 00:06:59.223 04:44:13 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:59.790 04:44:13 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:59.790 04:44:13 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:59.790 04:44:13 -- json_config/json_config.sh@130 -- # kill -0 39379 00:06:59.790 04:44:13 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:00.358 04:44:14 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:00.358 04:44:14 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:00.358 04:44:14 -- json_config/json_config.sh@130 -- # kill -0 39379 00:07:00.358 04:44:14 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:00.927 SPDK target shutdown done 00:07:00.927 INFO: relaunching applications... 00:07:00.927 04:44:14 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:00.927 04:44:14 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:00.927 04:44:14 -- json_config/json_config.sh@130 -- # kill -0 39379 00:07:00.927 04:44:14 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:00.927 04:44:14 -- json_config/json_config.sh@132 -- # break 00:07:00.927 04:44:14 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:00.927 04:44:14 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:00.927 04:44:14 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:00.927 04:44:14 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.927 04:44:14 -- json_config/json_config.sh@98 -- # local app=target 00:07:00.927 04:44:14 -- json_config/json_config.sh@99 -- # shift 00:07:00.927 04:44:14 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:00.927 Waiting for target to run... 00:07:00.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.927 04:44:14 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:00.927 04:44:14 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:00.927 04:44:14 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:00.927 04:44:14 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:00.927 04:44:14 -- json_config/json_config.sh@111 -- # app_pid[$app]=39643 00:07:00.927 04:44:14 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:00.927 04:44:14 -- json_config/json_config.sh@114 -- # waitforlisten 39643 /var/tmp/spdk_tgt.sock 00:07:00.927 04:44:14 -- common/autotest_common.sh@819 -- # '[' -z 39643 ']' 00:07:00.927 04:44:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.927 04:44:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:00.927 04:44:14 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.927 04:44:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.927 04:44:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:00.927 04:44:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.927 [2024-05-15 04:44:14.995664] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
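What this relaunch exercises is the save/restore round trip: the configuration built up over RPC was dumped with save_config, the target was killed, and the same binary now boots non-interactively from the dump via --json. Reduced to its two essential commands (paths as used in this workspace):

# Dump the live target's state (bdevs, subsystems, ...) as JSON.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

# After stopping the target, boot a fresh one straight from that file.
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json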
00:07:00.927 [2024-05-15 04:44:14.996054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39643 ] 00:07:01.495 [2024-05-15 04:44:15.598301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.754 [2024-05-15 04:44:15.813469] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.754 [2024-05-15 04:44:15.813677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.690 [2024-05-15 04:44:16.648294] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:02.690 [2024-05-15 04:44:16.648417] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:02.690 [2024-05-15 04:44:16.656254] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:02.690 [2024-05-15 04:44:16.656304] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:02.690 [2024-05-15 04:44:16.664300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:02.690 [2024-05-15 04:44:16.664347] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:02.690 [2024-05-15 04:44:16.664373] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:02.690 [2024-05-15 04:44:16.753705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:02.690 [2024-05-15 04:44:16.753832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.690 [2024-05-15 04:44:16.753883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038780 00:07:02.690 [2024-05-15 04:44:16.753910] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.690 [2024-05-15 04:44:16.754288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.690 [2024-05-15 04:44:16.754318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:02.948 00:07:02.948 INFO: Checking if target configuration is the same... 00:07:02.948 04:44:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:02.948 04:44:17 -- common/autotest_common.sh@852 -- # return 0 00:07:02.949 04:44:17 -- json_config/json_config.sh@115 -- # echo '' 00:07:02.949 04:44:17 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:02.949 04:44:17 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:02.949 04:44:17 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.949 04:44:17 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:02.949 04:44:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:02.949 + '[' 2 -ne 2 ']' 00:07:02.949 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:02.949 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:02.949 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:02.949 +++ basename /dev/fd/62 00:07:02.949 ++ mktemp /tmp/62.XXX 00:07:02.949 + tmp_file_1=/tmp/62.QUd 00:07:02.949 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.949 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:02.949 + tmp_file_2=/tmp/spdk_tgt_config.json.GKM 00:07:02.949 + ret=0 00:07:02.949 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:03.208 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:03.466 + diff -u /tmp/62.QUd /tmp/spdk_tgt_config.json.GKM 00:07:03.466 INFO: JSON config files are the same 00:07:03.466 + echo 'INFO: JSON config files are the same' 00:07:03.466 + rm /tmp/62.QUd /tmp/spdk_tgt_config.json.GKM 00:07:03.466 + exit 0 00:07:03.466 INFO: changing configuration and checking if this can be detected... 00:07:03.466 04:44:17 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:03.466 04:44:17 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:03.466 04:44:17 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:03.466 04:44:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:03.725 04:44:17 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:03.725 04:44:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:03.725 04:44:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:03.725 + '[' 2 -ne 2 ']' 00:07:03.725 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:03.725 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:03.725 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:03.725 +++ basename /dev/fd/62 00:07:03.725 ++ mktemp /tmp/62.XXX 00:07:03.725 + tmp_file_1=/tmp/62.lid 00:07:03.725 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:03.725 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:03.725 + tmp_file_2=/tmp/spdk_tgt_config.json.Rqz 00:07:03.725 + ret=0 00:07:03.725 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:03.984 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:03.984 + diff -u /tmp/62.lid /tmp/spdk_tgt_config.json.Rqz 00:07:03.984 + ret=1 00:07:03.984 + echo '=== Start of file: /tmp/62.lid ===' 00:07:03.984 + cat /tmp/62.lid 00:07:03.984 + echo '=== End of file: /tmp/62.lid ===' 00:07:03.984 + echo '' 00:07:03.984 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Rqz ===' 00:07:03.984 + cat /tmp/spdk_tgt_config.json.Rqz 00:07:03.984 + echo '=== End of file: /tmp/spdk_tgt_config.json.Rqz ===' 00:07:03.984 + echo '' 00:07:03.984 + rm /tmp/62.lid /tmp/spdk_tgt_config.json.Rqz 00:07:03.984 + exit 1 00:07:03.984 INFO: configuration change detected. 00:07:03.984 04:44:18 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
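Both comparisons above follow the same recipe: run each JSON config through config_filter.py -method sort so that ordering cannot produce spurious differences, then let diff -u decide — exit 0 for the unchanged config, exit 1 once MallocBdevForConfigChangeCheck is deleted below. With the xtrace noise stripped, json_diff.sh boils down to roughly this; the stdin/stdout plumbing is inferred, since set -x does not show redirections:

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Normalize both inputs so the diff is order-insensitive.
test/json_config/config_filter.py -method sort < "$1" > "$tmp_file_1"
test/json_config/config_filter.py -method sort < "$2" > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
    ret=0
else
    ret=1     # the caller (json_config.sh) reports the detected change
fi
rm "$tmp_file_1" "$tmp_file_2"
exit $ret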
00:07:03.984 04:44:18 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:03.984 04:44:18 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:03.984 04:44:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:03.984 04:44:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.984 04:44:18 -- json_config/json_config.sh@360 -- # local ret=0 00:07:03.984 04:44:18 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:03.984 04:44:18 -- json_config/json_config.sh@370 -- # [[ -n 39643 ]] 00:07:03.984 04:44:18 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:03.984 04:44:18 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:03.984 04:44:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:03.984 04:44:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.984 04:44:18 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:03.984 04:44:18 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:03.984 04:44:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:03.984 04:44:18 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:03.984 04:44:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:04.243 04:44:18 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:04.243 04:44:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:04.243 04:44:18 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:04.243 04:44:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:04.504 04:44:18 -- json_config/json_config.sh@246 -- # uname -s 00:07:04.504 04:44:18 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:04.504 04:44:18 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:04.504 04:44:18 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:04.504 04:44:18 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:04.504 04:44:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:04.504 04:44:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.763 04:44:18 -- json_config/json_config.sh@376 -- # killprocess 39643 00:07:04.763 04:44:18 -- common/autotest_common.sh@926 -- # '[' -z 39643 ']' 00:07:04.763 04:44:18 -- common/autotest_common.sh@930 -- # kill -0 39643 00:07:04.763 04:44:18 -- common/autotest_common.sh@931 -- # uname 00:07:04.763 04:44:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:04.763 04:44:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 39643 00:07:04.763 04:44:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:04.763 killing process with pid 39643 00:07:04.763 04:44:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:04.763 04:44:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 39643' 00:07:04.763 04:44:18 -- common/autotest_common.sh@945 -- # kill 39643 00:07:04.763 04:44:18 -- common/autotest_common.sh@950 -- # wait 39643 00:07:05.700 04:44:19 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:05.700 04:44:19 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:05.700 04:44:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:05.700 04:44:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.958 INFO: Success 00:07:05.958 04:44:19 -- json_config/json_config.sh@381 -- # return 0 00:07:05.958 04:44:19 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:05.958 00:07:05.958 real 0m13.296s 00:07:05.958 user 0m17.011s 00:07:05.958 sys 0m2.752s 00:07:05.958 ************************************ 00:07:05.958 END TEST json_config 00:07:05.958 ************************************ 00:07:05.958 04:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.958 04:44:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.958 04:44:19 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:05.958 04:44:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:05.958 04:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.958 04:44:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.958 ************************************ 00:07:05.958 START TEST json_config_extra_key 00:07:05.958 ************************************ 00:07:05.958 04:44:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:05.958 04:44:20 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.958 04:44:20 -- nvmf/common.sh@7 -- # uname -s 00:07:05.958 04:44:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.958 04:44:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.958 04:44:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.958 04:44:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.958 04:44:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.958 04:44:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.958 04:44:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.958 04:44:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.958 04:44:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.958 04:44:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.958 04:44:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b3e3dd0-5b68-4fa0-8a46-2430b3f7b8bc 00:07:05.958 04:44:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b3e3dd0-5b68-4fa0-8a46-2430b3f7b8bc 00:07:05.958 04:44:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.958 04:44:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.958 04:44:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:05.958 04:44:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.958 04:44:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.958 04:44:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.958 04:44:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.959 04:44:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:05.959 04:44:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:05.959 04:44:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:05.959 04:44:20 -- paths/export.sh@5 -- # export PATH 00:07:05.959 04:44:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:05.959 04:44:20 -- nvmf/common.sh@46 -- # : 0 00:07:05.959 04:44:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:05.959 04:44:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:05.959 04:44:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:05.959 04:44:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.959 04:44:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.959 04:44:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:05.959 04:44:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:05.959 04:44:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:05.959 INFO: launching applications... 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
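As in json_config.sh, the harness keeps all per-application state in associative arrays keyed by role, so the same start/stop/RPC helpers can serve a target here and, in other tests, an initiator. The declarations just traced amount to:

declare -A app_pid=([target]="")
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

# Helpers then stay generic across roles, e.g.:
#   scripts/rpc.py -s "${app_socket[$app]}" save_config
#   kill -SIGINT "${app_pid[$app]}"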
00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:05.959 Waiting for target to run... 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=39838 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 39838 /var/tmp/spdk_tgt.sock 00:07:05.959 04:44:20 -- common/autotest_common.sh@819 -- # '[' -z 39838 ']' 00:07:05.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:05.959 04:44:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:05.959 04:44:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.959 04:44:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:05.959 04:44:20 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:05.959 04:44:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.959 04:44:20 -- common/autotest_common.sh@10 -- # set +x 00:07:06.217 [2024-05-15 04:44:20.215552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:06.217 [2024-05-15 04:44:20.215920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39838 ] 00:07:06.785 [2024-05-15 04:44:20.837665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.044 [2024-05-15 04:44:21.047594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.044 [2024-05-15 04:44:21.048032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.429 00:07:08.429 INFO: shutting down applications... 00:07:08.429 04:44:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.429 04:44:22 -- common/autotest_common.sh@852 -- # return 0 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
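Shutdown is cooperative: one SIGINT, then the PID is polled with kill -0 for up to thirty half-second intervals — the loop traced below, and earlier in the json_config run. In outline:

kill -SIGINT "${app_pid[$app]}"

for ((i = 0; i < 30; i++)); do
    # kill -0 delivers no signal; it only tests that the PID still exists.
    kill -0 "${app_pid[$app]}" 2>/dev/null || break
    sleep 0.5
done

app_pid[$app]=
echo 'SPDK target shutdown done'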
00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 39838 ]] 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 39838 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:08.429 04:44:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:08.687 04:44:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:08.687 04:44:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:08.687 04:44:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:08.687 04:44:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:09.254 04:44:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:09.254 04:44:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:09.254 04:44:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:09.254 04:44:23 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:09.820 04:44:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:09.820 04:44:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:09.820 04:44:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:09.820 04:44:23 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:10.386 04:44:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:10.386 04:44:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:10.386 04:44:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:10.386 04:44:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:10.644 04:44:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:10.644 04:44:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:10.644 04:44:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:10.644 04:44:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:11.211 SPDK target shutdown done 00:07:11.211 Success 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39838 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:11.211 04:44:25 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:11.211 ************************************ 00:07:11.211 END TEST json_config_extra_key 00:07:11.211 ************************************ 00:07:11.211 00:07:11.211 real 0m5.386s 00:07:11.211 user 0m5.014s 00:07:11.211 sys 0m0.826s 00:07:11.211 04:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.211 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.211 04:44:25 -- 
spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:11.211 04:44:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.211 04:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.211 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.211 ************************************ 00:07:11.211 START TEST alias_rpc 00:07:11.211 ************************************ 00:07:11.211 04:44:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:11.469 * Looking for test storage... 00:07:11.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:11.469 04:44:25 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.469 04:44:25 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=39981 00:07:11.469 04:44:25 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 39981 00:07:11.469 04:44:25 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.469 04:44:25 -- common/autotest_common.sh@819 -- # '[' -z 39981 ']' 00:07:11.469 04:44:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.469 04:44:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.469 04:44:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.469 04:44:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.469 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.469 [2024-05-15 04:44:25.661496] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
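The json_config_extra_key shutdown traced above is a pattern these suites reuse: send SIGINT to the target, then probe the pid with kill -0 for at most 30 half-second intervals before declaring shutdown done. A minimal sketch of that loop, assuming a plain shell context (pid and loop bounds taken from the trace; the traced script's per-app bookkeeping is elided):

  pid=39838            # recorded when the target was launched
  kill -SIGINT "$pid"  # ask the SPDK target to shut down cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes for existence, sends nothing
      sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'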
00:07:11.469 [2024-05-15 04:44:25.661663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39981 ] 00:07:11.727 [2024-05-15 04:44:25.817936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.985 [2024-05-15 04:44:26.048220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.985 [2024-05-15 04:44:26.048435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.887 04:44:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.887 04:44:27 -- common/autotest_common.sh@852 -- # return 0 00:07:13.887 04:44:27 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:13.887 04:44:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 39981 00:07:13.888 04:44:27 -- common/autotest_common.sh@926 -- # '[' -z 39981 ']' 00:07:13.888 04:44:27 -- common/autotest_common.sh@930 -- # kill -0 39981 00:07:13.888 04:44:27 -- common/autotest_common.sh@931 -- # uname 00:07:13.888 04:44:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:13.888 04:44:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 39981 00:07:13.888 killing process with pid 39981 00:07:13.888 04:44:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:13.888 04:44:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:13.888 04:44:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 39981' 00:07:13.888 04:44:27 -- common/autotest_common.sh@945 -- # kill 39981 00:07:13.888 04:44:27 -- common/autotest_common.sh@950 -- # wait 39981 00:07:16.419 00:07:16.419 real 0m5.164s 00:07:16.419 user 0m5.167s 00:07:16.419 sys 0m0.738s 00:07:16.419 04:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.419 ************************************ 00:07:16.419 END TEST alias_rpc 00:07:16.419 ************************************ 00:07:16.419 04:44:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.419 04:44:30 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:07:16.419 04:44:30 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:16.419 04:44:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.419 04:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.419 04:44:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.419 ************************************ 00:07:16.419 START TEST spdkcli_tcp 00:07:16.419 ************************************ 00:07:16.419 04:44:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:16.677 * Looking for test storage... 
00:07:16.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:16.677 04:44:30 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:16.677 04:44:30 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:16.677 04:44:30 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:16.678 04:44:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:16.678 04:44:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=40111 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@27 -- # waitforlisten 40111 00:07:16.678 04:44:30 -- common/autotest_common.sh@819 -- # '[' -z 40111 ']' 00:07:16.678 04:44:30 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:16.678 04:44:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.678 04:44:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.678 04:44:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.678 04:44:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.678 04:44:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.678 [2024-05-15 04:44:30.883395] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
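The spdkcli_tcp suite starting here drives the same RPC surface over TCP instead of the UNIX socket: spdk_tgt keeps listening on /var/tmp/spdk.sock, and a socat process (traced just below, pid 40144) bridges 127.0.0.1:9998 to it. A minimal sketch of that bridge, with the addresses and rpc.py flags copied from the trace:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # forward the TCP client to the RPC socket
  socat_pid=$!
  # -r retry count, -t per-request timeout, -s server address, -p TCP port
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"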
00:07:16.678 [2024-05-15 04:44:30.883559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40111 ] 00:07:16.935 [2024-05-15 04:44:31.039424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.194 [2024-05-15 04:44:31.266976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.194 [2024-05-15 04:44:31.267981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.194 [2024-05-15 04:44:31.267982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.578 04:44:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.578 04:44:32 -- common/autotest_common.sh@852 -- # return 0 00:07:18.578 04:44:32 -- spdkcli/tcp.sh@31 -- # socat_pid=40144 00:07:18.578 04:44:32 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:18.578 04:44:32 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:18.578 [ 00:07:18.578 "spdk_get_version", 00:07:18.578 "rpc_get_methods", 00:07:18.578 "trace_get_info", 00:07:18.578 "trace_get_tpoint_group_mask", 00:07:18.578 "trace_disable_tpoint_group", 00:07:18.578 "trace_enable_tpoint_group", 00:07:18.578 "trace_clear_tpoint_mask", 00:07:18.578 "trace_set_tpoint_mask", 00:07:18.578 "framework_get_pci_devices", 00:07:18.578 "framework_get_config", 00:07:18.578 "framework_get_subsystems", 00:07:18.578 "iobuf_get_stats", 00:07:18.578 "iobuf_set_options", 00:07:18.578 "sock_set_default_impl", 00:07:18.578 "sock_impl_set_options", 00:07:18.578 "sock_impl_get_options", 00:07:18.578 "vmd_rescan", 00:07:18.578 "vmd_remove_device", 00:07:18.578 "vmd_enable", 00:07:18.578 "accel_get_stats", 00:07:18.578 "accel_set_options", 00:07:18.578 "accel_set_driver", 00:07:18.578 "accel_crypto_key_destroy", 00:07:18.578 "accel_crypto_keys_get", 00:07:18.578 "accel_crypto_key_create", 00:07:18.578 "accel_assign_opc", 00:07:18.578 "accel_get_module_info", 00:07:18.578 "accel_get_opc_assignments", 00:07:18.578 "notify_get_notifications", 00:07:18.578 "notify_get_types", 00:07:18.578 "bdev_get_histogram", 00:07:18.578 "bdev_enable_histogram", 00:07:18.578 "bdev_set_qos_limit", 00:07:18.578 "bdev_set_qd_sampling_period", 00:07:18.578 "bdev_get_bdevs", 00:07:18.578 "bdev_reset_iostat", 00:07:18.578 "bdev_get_iostat", 00:07:18.578 "bdev_examine", 00:07:18.578 "bdev_wait_for_examine", 00:07:18.578 "bdev_set_options", 00:07:18.578 "scsi_get_devices", 00:07:18.578 "thread_set_cpumask", 00:07:18.578 "framework_get_scheduler", 00:07:18.578 "framework_set_scheduler", 00:07:18.578 "framework_get_reactors", 00:07:18.578 "thread_get_io_channels", 00:07:18.578 "thread_get_pollers", 00:07:18.578 "thread_get_stats", 00:07:18.578 "framework_monitor_context_switch", 00:07:18.578 "spdk_kill_instance", 00:07:18.578 "log_enable_timestamps", 00:07:18.578 "log_get_flags", 00:07:18.578 "log_clear_flag", 00:07:18.578 "log_set_flag", 00:07:18.578 "log_get_level", 00:07:18.578 "log_set_level", 00:07:18.578 "log_get_print_level", 00:07:18.578 "log_set_print_level", 00:07:18.578 "framework_enable_cpumask_locks", 00:07:18.578 "framework_disable_cpumask_locks", 00:07:18.578 "framework_wait_init", 00:07:18.578 "framework_start_init", 00:07:18.578 "virtio_blk_create_transport", 00:07:18.578 "virtio_blk_get_transports", 
00:07:18.578 "vhost_controller_set_coalescing", 00:07:18.578 "vhost_get_controllers", 00:07:18.578 "vhost_delete_controller", 00:07:18.578 "vhost_create_blk_controller", 00:07:18.578 "vhost_scsi_controller_remove_target", 00:07:18.578 "vhost_scsi_controller_add_target", 00:07:18.578 "vhost_start_scsi_controller", 00:07:18.578 "vhost_create_scsi_controller", 00:07:18.578 "nbd_get_disks", 00:07:18.578 "nbd_stop_disk", 00:07:18.578 "nbd_start_disk", 00:07:18.578 "env_dpdk_get_mem_stats", 00:07:18.578 "nvmf_subsystem_get_listeners", 00:07:18.578 "nvmf_subsystem_get_qpairs", 00:07:18.578 "nvmf_subsystem_get_controllers", 00:07:18.578 "nvmf_get_stats", 00:07:18.578 "nvmf_get_transports", 00:07:18.578 "nvmf_create_transport", 00:07:18.578 "nvmf_get_targets", 00:07:18.578 "nvmf_delete_target", 00:07:18.578 "nvmf_create_target", 00:07:18.578 "nvmf_subsystem_allow_any_host", 00:07:18.578 "nvmf_subsystem_remove_host", 00:07:18.578 "nvmf_subsystem_add_host", 00:07:18.578 "nvmf_subsystem_remove_ns", 00:07:18.578 "nvmf_subsystem_add_ns", 00:07:18.578 "nvmf_subsystem_listener_set_ana_state", 00:07:18.578 "nvmf_discovery_get_referrals", 00:07:18.578 "nvmf_discovery_remove_referral", 00:07:18.578 "nvmf_discovery_add_referral", 00:07:18.578 "nvmf_subsystem_remove_listener", 00:07:18.578 "nvmf_subsystem_add_listener", 00:07:18.578 "nvmf_delete_subsystem", 00:07:18.578 "nvmf_create_subsystem", 00:07:18.578 "nvmf_get_subsystems", 00:07:18.578 "nvmf_set_crdt", 00:07:18.578 "nvmf_set_config", 00:07:18.578 "nvmf_set_max_subsystems", 00:07:18.578 "iscsi_set_options", 00:07:18.578 "iscsi_get_auth_groups", 00:07:18.578 "iscsi_auth_group_remove_secret", 00:07:18.578 "iscsi_auth_group_add_secret", 00:07:18.578 "iscsi_delete_auth_group", 00:07:18.578 "iscsi_create_auth_group", 00:07:18.578 "iscsi_set_discovery_auth", 00:07:18.578 "iscsi_get_options", 00:07:18.578 "iscsi_target_node_request_logout", 00:07:18.578 "iscsi_target_node_set_redirect", 00:07:18.578 "iscsi_target_node_set_auth", 00:07:18.578 "iscsi_target_node_add_lun", 00:07:18.578 "iscsi_get_connections", 00:07:18.578 "iscsi_portal_group_set_auth", 00:07:18.578 "iscsi_start_portal_group", 00:07:18.578 "iscsi_delete_portal_group", 00:07:18.578 "iscsi_create_portal_group", 00:07:18.578 "iscsi_get_portal_groups", 00:07:18.578 "iscsi_delete_target_node", 00:07:18.578 "iscsi_target_node_remove_pg_ig_maps", 00:07:18.578 "iscsi_target_node_add_pg_ig_maps", 00:07:18.578 "iscsi_create_target_node", 00:07:18.578 "iscsi_get_target_nodes", 00:07:18.578 "iscsi_delete_initiator_group", 00:07:18.578 "iscsi_initiator_group_remove_initiators", 00:07:18.578 "iscsi_initiator_group_add_initiators", 00:07:18.578 "iscsi_create_initiator_group", 00:07:18.578 "iscsi_get_initiator_groups", 00:07:18.578 "iaa_scan_accel_module", 00:07:18.578 "dsa_scan_accel_module", 00:07:18.578 "ioat_scan_accel_module", 00:07:18.578 "accel_error_inject_error", 00:07:18.578 "bdev_daos_resize", 00:07:18.578 "bdev_daos_delete", 00:07:18.578 "bdev_daos_create", 00:07:18.578 "bdev_virtio_attach_controller", 00:07:18.578 "bdev_virtio_scsi_get_devices", 00:07:18.578 "bdev_virtio_detach_controller", 00:07:18.578 "bdev_virtio_blk_set_hotplug", 00:07:18.578 "bdev_ftl_set_property", 00:07:18.578 "bdev_ftl_get_properties", 00:07:18.578 "bdev_ftl_get_stats", 00:07:18.578 "bdev_ftl_unmap", 00:07:18.578 "bdev_ftl_unload", 00:07:18.578 "bdev_ftl_delete", 00:07:18.578 "bdev_ftl_load", 00:07:18.578 "bdev_ftl_create", 00:07:18.578 "bdev_aio_delete", 00:07:18.578 "bdev_aio_rescan", 00:07:18.578 "bdev_aio_create", 
00:07:18.578 "blobfs_create", 00:07:18.578 "blobfs_detect", 00:07:18.578 "blobfs_set_cache_size", 00:07:18.578 "bdev_zone_block_delete", 00:07:18.578 "bdev_zone_block_create", 00:07:18.578 "bdev_delay_delete", 00:07:18.578 "bdev_delay_create", 00:07:18.578 "bdev_delay_update_latency", 00:07:18.578 "bdev_split_delete", 00:07:18.578 "bdev_split_create", 00:07:18.578 "bdev_error_inject_error", 00:07:18.578 "bdev_error_delete", 00:07:18.578 "bdev_error_create", 00:07:18.578 "bdev_raid_set_options", 00:07:18.578 "bdev_raid_remove_base_bdev", 00:07:18.578 "bdev_raid_add_base_bdev", 00:07:18.578 "bdev_raid_delete", 00:07:18.578 "bdev_raid_create", 00:07:18.578 "bdev_raid_get_bdevs", 00:07:18.578 "bdev_lvol_grow_lvstore", 00:07:18.578 "bdev_lvol_get_lvols", 00:07:18.578 "bdev_lvol_get_lvstores", 00:07:18.578 "bdev_lvol_delete", 00:07:18.578 "bdev_lvol_set_read_only", 00:07:18.578 "bdev_lvol_resize", 00:07:18.578 "bdev_lvol_decouple_parent", 00:07:18.578 "bdev_lvol_inflate", 00:07:18.578 "bdev_lvol_rename", 00:07:18.578 "bdev_lvol_clone_bdev", 00:07:18.578 "bdev_lvol_clone", 00:07:18.578 "bdev_lvol_snapshot", 00:07:18.578 "bdev_lvol_create", 00:07:18.578 "bdev_lvol_delete_lvstore", 00:07:18.578 "bdev_lvol_rename_lvstore", 00:07:18.578 "bdev_lvol_create_lvstore", 00:07:18.578 "bdev_passthru_delete", 00:07:18.578 "bdev_passthru_create", 00:07:18.578 "bdev_nvme_cuse_unregister", 00:07:18.578 "bdev_nvme_cuse_register", 00:07:18.578 "bdev_opal_new_user", 00:07:18.578 "bdev_opal_set_lock_state", 00:07:18.578 "bdev_opal_delete", 00:07:18.578 "bdev_opal_get_info", 00:07:18.578 "bdev_opal_create", 00:07:18.578 "bdev_nvme_opal_revert", 00:07:18.579 "bdev_nvme_opal_init", 00:07:18.579 "bdev_nvme_send_cmd", 00:07:18.579 "bdev_nvme_get_path_iostat", 00:07:18.579 "bdev_nvme_get_mdns_discovery_info", 00:07:18.579 "bdev_nvme_stop_mdns_discovery", 00:07:18.579 "bdev_nvme_start_mdns_discovery", 00:07:18.579 "bdev_nvme_set_multipath_policy", 00:07:18.579 "bdev_nvme_set_preferred_path", 00:07:18.579 "bdev_nvme_get_io_paths", 00:07:18.579 "bdev_nvme_remove_error_injection", 00:07:18.579 "bdev_nvme_add_error_injection", 00:07:18.579 "bdev_nvme_get_discovery_info", 00:07:18.579 "bdev_nvme_stop_discovery", 00:07:18.579 "bdev_nvme_start_discovery", 00:07:18.579 "bdev_nvme_get_controller_health_info", 00:07:18.579 "bdev_nvme_disable_controller", 00:07:18.579 "bdev_nvme_enable_controller", 00:07:18.579 "bdev_nvme_reset_controller", 00:07:18.579 "bdev_nvme_get_transport_statistics", 00:07:18.579 "bdev_nvme_apply_firmware", 00:07:18.579 "bdev_nvme_detach_controller", 00:07:18.579 "bdev_nvme_get_controllers", 00:07:18.579 "bdev_nvme_attach_controller", 00:07:18.579 "bdev_nvme_set_hotplug", 00:07:18.579 "bdev_nvme_set_options", 00:07:18.579 "bdev_null_resize", 00:07:18.579 "bdev_null_delete", 00:07:18.579 "bdev_null_create", 00:07:18.579 "bdev_malloc_delete", 00:07:18.579 "bdev_malloc_create" 00:07:18.579 ] 00:07:18.579 04:44:32 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:18.579 04:44:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:18.579 04:44:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.579 04:44:32 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:18.579 04:44:32 -- spdkcli/tcp.sh@38 -- # killprocess 40111 00:07:18.579 04:44:32 -- common/autotest_common.sh@926 -- # '[' -z 40111 ']' 00:07:18.579 04:44:32 -- common/autotest_common.sh@930 -- # kill -0 40111 00:07:18.579 04:44:32 -- common/autotest_common.sh@931 -- # uname 00:07:18.579 04:44:32 -- common/autotest_common.sh@931 -- 
# '[' Linux = Linux ']' 00:07:18.579 04:44:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40111 00:07:18.579 killing process with pid 40111 00:07:18.579 04:44:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:18.579 04:44:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:18.579 04:44:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40111' 00:07:18.579 04:44:32 -- common/autotest_common.sh@945 -- # kill 40111 00:07:18.579 04:44:32 -- common/autotest_common.sh@950 -- # wait 40111 00:07:21.130 00:07:21.130 real 0m4.624s 00:07:21.130 user 0m8.117s 00:07:21.130 sys 0m0.772s 00:07:21.130 04:44:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.130 ************************************ 00:07:21.130 END TEST spdkcli_tcp 00:07:21.130 ************************************ 00:07:21.130 04:44:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.130 04:44:35 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.130 04:44:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:21.130 04:44:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.130 04:44:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.130 ************************************ 00:07:21.130 START TEST dpdk_mem_utility 00:07:21.130 ************************************ 00:07:21.130 04:44:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.389 * Looking for test storage... 00:07:21.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:21.389 04:44:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:21.389 04:44:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=40259 00:07:21.389 04:44:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 40259 00:07:21.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.389 04:44:35 -- common/autotest_common.sh@819 -- # '[' -z 40259 ']' 00:07:21.389 04:44:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.389 04:44:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:21.389 04:44:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.389 04:44:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:21.389 04:44:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.389 04:44:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.389 [2024-05-15 04:44:35.545909] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
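The dpdk_mem_utility suite launched here exercises two tools against the target, as traced below: the env_dpdk_get_mem_stats RPC makes the target write its DPDK heap state to a dump file, and scripts/dpdk_mem_info.py post-processes that dump. A sketch of the sequence (names and flags from the trace; the dump path comes from the RPC's reply):

  scripts/rpc.py env_dpdk_get_mem_stats   # target dumps its heaps; the reply names /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # summarize the dump: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0 (the long listing below)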
00:07:21.389 [2024-05-15 04:44:35.546071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40259 ] 00:07:21.648 [2024-05-15 04:44:35.702472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.907 [2024-05-15 04:44:35.921509] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.907 [2024-05-15 04:44:35.921834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.843 04:44:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:22.843 04:44:37 -- common/autotest_common.sh@852 -- # return 0 00:07:22.843 04:44:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:22.843 04:44:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:22.843 04:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.843 04:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.843 { 00:07:22.843 "filename": "/tmp/spdk_mem_dump.txt" 00:07:22.843 } 00:07:22.843 04:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.843 04:44:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:23.104 DPDK memory size 868.000000 MiB in 1 heap(s) 00:07:23.104 1 heaps totaling size 868.000000 MiB 00:07:23.104 size: 868.000000 MiB heap id: 0 00:07:23.104 end heaps---------- 00:07:23.104 8 mempools totaling size 646.224487 MiB 00:07:23.104 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:23.104 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:23.104 size: 132.629456 MiB name: bdev_io_40259 00:07:23.104 size: 51.011292 MiB name: evtpool_40259 00:07:23.104 size: 50.003479 MiB name: msgpool_40259 00:07:23.104 size: 21.763794 MiB name: PDU_Pool 00:07:23.104 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:23.104 size: 0.026123 MiB name: Session_Pool 00:07:23.104 end mempools------- 00:07:23.104 6 memzones totaling size 4.142822 MiB 00:07:23.105 size: 1.000366 MiB name: RG_ring_0_40259 00:07:23.105 size: 1.000366 MiB name: RG_ring_1_40259 00:07:23.105 size: 1.000366 MiB name: RG_ring_4_40259 00:07:23.105 size: 1.000366 MiB name: RG_ring_5_40259 00:07:23.105 size: 0.125366 MiB name: RG_ring_2_40259 00:07:23.105 size: 0.015991 MiB name: RG_ring_3_40259 00:07:23.105 end memzones------- 00:07:23.105 04:44:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:23.105 heap id: 0 total size: 868.000000 MiB number of busy elements: 267 number of free elements: 18 00:07:23.105 list of free elements. 
size: 18.351196 MiB 00:07:23.105 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:23.105 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:23.105 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:23.105 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:23.105 element at address: 0x20001c100040 with size: 0.999939 MiB 00:07:23.105 element at address: 0x20001c500040 with size: 0.999939 MiB 00:07:23.105 element at address: 0x20001c600000 with size: 0.999084 MiB 00:07:23.105 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:23.105 element at address: 0x200035200000 with size: 0.994324 MiB 00:07:23.105 element at address: 0x20001be00000 with size: 0.959656 MiB 00:07:23.105 element at address: 0x20001c900040 with size: 0.936401 MiB 00:07:23.105 element at address: 0x200000200000 with size: 0.833862 MiB 00:07:23.105 element at address: 0x20001e000000 with size: 0.563171 MiB 00:07:23.105 element at address: 0x20001c200000 with size: 0.487976 MiB 00:07:23.105 element at address: 0x20001ca00000 with size: 0.485413 MiB 00:07:23.105 element at address: 0x20002b400000 with size: 0.397766 MiB 00:07:23.105 element at address: 0x200013800000 with size: 0.359985 MiB 00:07:23.105 element at address: 0x200003a00000 with size: 0.349304 MiB 00:07:23.105 list of standard malloc elements. size: 199.276001 MiB 00:07:23.105 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:23.105 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:23.105 element at address: 0x20001bffff80 with size: 1.000183 MiB 00:07:23.105 element at address: 0x20001c3fff80 with size: 1.000183 MiB 00:07:23.105 element at address: 0x20001c7fff80 with size: 1.000183 MiB 00:07:23.105 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:23.105 element at address: 0x20001c9eff40 with size: 0.062683 MiB 00:07:23.105 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:23.105 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:07:23.105 element at address: 0x20001c9efdc0 with size: 0.000366 MiB 00:07:23.105 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:23.105 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6b00 with size: 0.000244 MiB 
00:07:23.105 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a596c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a597c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a598c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a599c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a59ac0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a59bc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a59cc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a59dc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a59ec0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a59fc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a0c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:07:23.105 element at 
address: 0x20000b1ff300 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c280 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c380 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c480 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c580 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c680 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c780 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c880 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001385c980 with size: 0.000244 MiB 00:07:23.105 element at address: 0x2000138dccc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001befdd00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27cec0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27cfc0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d0c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d1c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d2c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d3c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d4c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d5c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d6c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d7c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d8c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c27d9c0 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c2fdd00 with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c6ffc40 
with size: 0.000244 MiB 00:07:23.105 element at address: 0x20001c9efbc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001c9efcc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001cabc680 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0902c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0903c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0904c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0905c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0906c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0907c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0908c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0909c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e090ac0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e090bc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e090cc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e090dc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e090ec0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e090fc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0910c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0911c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0912c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0913c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0914c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0915c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0916c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0917c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0918c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0919c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e091ac0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e091bc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e091cc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e091dc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e091ec0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e091fc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0920c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0921c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0922c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0923c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0924c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0925c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0926c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0927c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0928c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0929c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e092ac0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e092bc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e092cc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e092dc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e092ec0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e092fc0 with size: 0.000244 MiB 
00:07:23.106 element at address: 0x20001e0930c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0931c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0932c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0933c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0934c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0935c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0936c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0937c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0938c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0939c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e093ac0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e093bc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e093cc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e093dc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e093ec0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e093fc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0940c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0941c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0942c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0943c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0944c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0945c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0946c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0947c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0948c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0949c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e094ac0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e094bc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e094cc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e094dc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e094ec0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e094fc0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0950c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0951c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0952c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20001e0953c0 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b465d40 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b465e40 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46cb00 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46cd80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46ce80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46cf80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d080 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d180 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d280 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d380 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d480 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d580 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d680 with size: 0.000244 MiB 00:07:23.106 element at 
address: 0x20002b46d780 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d880 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46d980 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46da80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46db80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46dc80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46dd80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46de80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46df80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e080 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e180 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e280 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e380 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e480 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e580 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e680 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e780 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e880 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46e980 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46ea80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46eb80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46ec80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46ed80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46ee80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46ef80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f080 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f180 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f280 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f380 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f480 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f580 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f680 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f780 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f880 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46f980 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46fa80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46fb80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46fc80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46fd80 with size: 0.000244 MiB 00:07:23.106 element at address: 0x20002b46fe80 with size: 0.000244 MiB 00:07:23.106 list of memzone associated elements. 
size: 650.372803 MiB 00:07:23.106 element at address: 0x20001e0954c0 with size: 211.416809 MiB 00:07:23.106 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:23.106 element at address: 0x20002b46ff80 with size: 157.562622 MiB 00:07:23.106 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:23.106 element at address: 0x2000139def40 with size: 132.129089 MiB 00:07:23.106 associated memzone info: size: 132.128906 MiB name: MP_bdev_io_40259_0 00:07:23.106 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:23.106 associated memzone info: size: 48.002930 MiB name: MP_evtpool_40259_0 00:07:23.106 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:23.106 associated memzone info: size: 48.002930 MiB name: MP_msgpool_40259_0 00:07:23.106 element at address: 0x20001cbbe900 with size: 20.255615 MiB 00:07:23.106 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:23.106 element at address: 0x2000353feb00 with size: 18.005127 MiB 00:07:23.106 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:23.106 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:23.106 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_40259 00:07:23.106 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:23.106 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_40259 00:07:23.106 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:23.106 associated memzone info: size: 1.007996 MiB name: MP_evtpool_40259 00:07:23.106 element at address: 0x20001c2fde00 with size: 1.008179 MiB 00:07:23.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:23.107 element at address: 0x20001cabc780 with size: 1.008179 MiB 00:07:23.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:23.107 element at address: 0x20001befde00 with size: 1.008179 MiB 00:07:23.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:23.107 element at address: 0x2000138dcdc0 with size: 1.008179 MiB 00:07:23.107 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:23.107 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:23.107 associated memzone info: size: 1.000366 MiB name: RG_ring_0_40259 00:07:23.107 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:23.107 associated memzone info: size: 1.000366 MiB name: RG_ring_1_40259 00:07:23.107 element at address: 0x20001c6ffd40 with size: 1.000549 MiB 00:07:23.107 associated memzone info: size: 1.000366 MiB name: RG_ring_4_40259 00:07:23.107 element at address: 0x2000352fe8c0 with size: 1.000549 MiB 00:07:23.107 associated memzone info: size: 1.000366 MiB name: RG_ring_5_40259 00:07:23.107 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:23.107 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_40259 00:07:23.107 element at address: 0x20001c27dac0 with size: 0.500549 MiB 00:07:23.107 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:23.107 element at address: 0x20001385ca80 with size: 0.500549 MiB 00:07:23.107 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:23.107 element at address: 0x20001ca7c440 with size: 0.250549 MiB 00:07:23.107 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:23.107 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:23.107 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_40259 00:07:23.107 element at address: 0x20001bef5ac0 with size: 0.031799 MiB 00:07:23.107 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:23.107 element at address: 0x20002b465f40 with size: 0.023804 MiB 00:07:23.107 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:23.107 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:23.107 associated memzone info: size: 0.015991 MiB name: RG_ring_3_40259 00:07:23.107 element at address: 0x20002b46c0c0 with size: 0.002502 MiB 00:07:23.107 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:23.107 element at address: 0x2000002d6880 with size: 0.000366 MiB 00:07:23.107 associated memzone info: size: 0.000183 MiB name: MP_msgpool_40259 00:07:23.107 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:23.107 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_40259 00:07:23.107 element at address: 0x20002b46cc00 with size: 0.000366 MiB 00:07:23.107 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:23.107 04:44:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:23.107 04:44:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 40259 00:07:23.107 04:44:37 -- common/autotest_common.sh@926 -- # '[' -z 40259 ']' 00:07:23.107 04:44:37 -- common/autotest_common.sh@930 -- # kill -0 40259 00:07:23.107 04:44:37 -- common/autotest_common.sh@931 -- # uname 00:07:23.107 04:44:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:23.107 04:44:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40259 00:07:23.107 killing process with pid 40259 00:07:23.107 04:44:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:23.107 04:44:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:23.107 04:44:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40259' 00:07:23.107 04:44:37 -- common/autotest_common.sh@945 -- # kill 40259 00:07:23.107 04:44:37 -- common/autotest_common.sh@950 -- # wait 40259 00:07:25.643 ************************************ 00:07:25.643 END TEST dpdk_mem_utility 00:07:25.643 ************************************ 00:07:25.643 00:07:25.643 real 0m4.448s 00:07:25.643 user 0m4.267s 00:07:25.643 sys 0m0.734s 00:07:25.643 04:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.643 04:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.643 04:44:39 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:25.643 04:44:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:25.643 04:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.643 04:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.643 ************************************ 00:07:25.643 START TEST event 00:07:25.643 ************************************ 00:07:25.643 04:44:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:25.901 * Looking for test storage... 
00:07:25.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:25.901 04:44:39 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:25.901 04:44:39 -- bdev/nbd_common.sh@6 -- # set -e 00:07:25.901 04:44:39 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:25.901 04:44:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:25.901 04:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.901 04:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.901 ************************************ 00:07:25.901 START TEST event_perf 00:07:25.901 ************************************ 00:07:25.901 04:44:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:25.901 Running I/O for 1 seconds...[2024-05-15 04:44:39.957846] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:25.901 [2024-05-15 04:44:39.958066] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40382 ] 00:07:26.159 [2024-05-15 04:44:40.133256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.159 [2024-05-15 04:44:40.388998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.159 [2024-05-15 04:44:40.389109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.159 [2024-05-15 04:44:40.389264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.159 [2024-05-15 04:44:40.389267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.796 Running I/O for 1 seconds... 00:07:27.796 lcore 0: 282412 00:07:27.796 lcore 1: 282409 00:07:27.796 lcore 2: 282411 00:07:27.796 lcore 3: 282410 00:07:27.796 done. 00:07:27.796 ************************************ 00:07:27.796 END TEST event_perf 00:07:27.796 ************************************ 00:07:27.796 00:07:27.796 real 0m1.967s 00:07:27.796 user 0m4.721s 00:07:27.796 sys 0m0.151s 00:07:27.796 04:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.796 04:44:41 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 04:44:41 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:27.796 04:44:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:27.796 04:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.796 04:44:41 -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 ************************************ 00:07:27.796 START TEST event_reactor 00:07:27.796 ************************************ 00:07:27.796 04:44:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:27.796 [2024-05-15 04:44:41.984182] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
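For scale, the event_perf figures above decompose as follows: -m 0xF is core mask 0b1111, so four reactors (lcores 0-3) ran for the -t 1 second window, and each lcore line appears to report that lcore's event count for the window. The user time (~4.7 s) exceeding real time (~2.0 s) is consistent with four cores spinning concurrently.

  0xF = 0b1111  ->  lcores 0, 1, 2, 3
  282412 + 282409 + 282411 + 282410 = 1129642 events in ~1 s, i.e. roughly 1.13M events/s across 4 cores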
00:07:27.796 [2024-05-15 04:44:41.984408] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40435 ] 00:07:28.053 [2024-05-15 04:44:42.161599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.312 [2024-05-15 04:44:42.401628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.687 test_start 00:07:29.687 oneshot 00:07:29.687 tick 100 00:07:29.687 tick 100 00:07:29.687 tick 250 00:07:29.687 tick 100 00:07:29.687 tick 100 00:07:29.687 tick 100 00:07:29.687 tick 250 00:07:29.687 tick 500 00:07:29.687 tick 100 00:07:29.687 tick 100 00:07:29.687 tick 250 00:07:29.687 tick 100 00:07:29.687 tick 100 00:07:29.687 test_end 00:07:29.687 ************************************ 00:07:29.687 END TEST event_reactor 00:07:29.687 ************************************ 00:07:29.687 00:07:29.687 real 0m1.912s 00:07:29.687 user 0m1.648s 00:07:29.687 sys 0m0.163s 00:07:29.687 04:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.687 04:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.687 04:44:43 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:29.687 04:44:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:29.687 04:44:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.687 04:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.687 ************************************ 00:07:29.687 START TEST event_reactor_perf 00:07:29.687 ************************************ 00:07:29.687 04:44:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:29.946 [2024-05-15 04:44:43.951120] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:29.946 [2024-05-15 04:44:43.951357] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40485 ] 00:07:29.946 [2024-05-15 04:44:44.134161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.204 [2024-05-15 04:44:44.372939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.582 test_start 00:07:31.582 test_end 00:07:31.582 Performance: 822956 events per second 00:07:31.582 ************************************ 00:07:31.582 END TEST event_reactor_perf 00:07:31.582 ************************************ 00:07:31.582 00:07:31.582 real 0m1.892s 00:07:31.582 user 0m1.642s 00:07:31.582 sys 0m0.150s 00:07:31.582 04:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.582 04:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.841 04:44:45 -- event/event.sh@49 -- # uname -s 00:07:31.841 04:44:45 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:31.841 04:44:45 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:31.841 04:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:31.841 04:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.841 04:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.841 ************************************ 00:07:31.841 START TEST event_scheduler 00:07:31.841 ************************************ 00:07:31.841 04:44:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:31.841 * Looking for test storage... 00:07:31.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:31.841 04:44:45 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:31.841 04:44:45 -- scheduler/scheduler.sh@35 -- # scheduler_pid=40574 00:07:31.841 04:44:45 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:31.841 04:44:45 -- scheduler/scheduler.sh@37 -- # waitforlisten 40574 00:07:31.841 04:44:45 -- common/autotest_common.sh@819 -- # '[' -z 40574 ']' 00:07:31.841 04:44:45 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:31.841 04:44:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.841 04:44:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:31.841 04:44:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.841 04:44:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:31.841 04:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.100 [2024-05-15 04:44:46.110406] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
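Every suite in this section launches its app behind the same waitforlisten guard traced here (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A sketch of what that helper amounts to; the loop body is an assumption for illustration, not SPDK's verbatim implementation:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do            # max_retries=100, as in the trace
          kill -0 "$pid" 2>/dev/null || return 1   # give up if the app already died
          # any successful RPC means the socket is up and serving requests
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }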
00:07:32.100 [2024-05-15 04:44:46.110577] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40574 ] 00:07:32.100 [2024-05-15 04:44:46.269457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.359 [2024-05-15 04:44:46.564391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.359 [2024-05-15 04:44:46.564551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.359 [2024-05-15 04:44:46.564774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.359 [2024-05-15 04:44:46.564854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.927 04:44:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:32.927 04:44:46 -- common/autotest_common.sh@852 -- # return 0 00:07:32.927 04:44:46 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:32.927 04:44:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.927 04:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:32.927 POWER: Env isn't set yet! 00:07:32.927 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:32.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:32.927 POWER: Cannot set governor of lcore 0 to userspace 00:07:32.927 POWER: Attempting to initialise PSTAT power management... 00:07:32.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:32.927 POWER: Cannot set governor of lcore 0 to performance 00:07:32.927 POWER: Attempting to initialise AMD PSTATE power management... 00:07:32.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:32.927 POWER: Cannot set governor of lcore 0 to userspace 00:07:32.927 POWER: Attempting to initialise CPPC power management... 00:07:32.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:32.927 POWER: Cannot set governor of lcore 0 to userspace 00:07:32.927 POWER: Attempting to initialise VM power management... 00:07:32.927 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:32.927 POWER: Unable to set Power Management Environment for lcore 0 00:07:32.927 [2024-05-15 04:44:46.903772] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:32.927 [2024-05-15 04:44:46.903796] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:32.927 [2024-05-15 04:44:46.903834] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:32.927 04:44:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.927 04:44:46 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:32.927 04:44:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.927 04:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 [2024-05-15 04:44:47.372200] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
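[editor's note] The run of POWER errors above is expected on this VM: the DPDK governor probes each cpufreq driver in turn by trying to write scaling_governor under sysfs, none of those files exist on the guest, and the dynamic scheduler falls back ("Unable to initialize dpdk governor"). A quick standalone probe of what the governor would find (sysfs paths only, not part of the test; "no cpufreq control" covers both an absent file and one not writable by the caller):

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov=$cpu/cpufreq/scaling_governor
        if [ -w "$gov" ]; then
            echo "${cpu##*/}: governor=$(cat "$gov")"
        else
            echo "${cpu##*/}: no cpufreq control (the case this log hits)"
        fi
    done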
00:07:33.186 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.186 04:44:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:33.186 04:44:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.186 04:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.186 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 ************************************ 00:07:33.186 START TEST scheduler_create_thread 00:07:33.186 ************************************ 00:07:33.186 04:44:47 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:07:33.186 04:44:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:33.186 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.186 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 2 00:07:33.186 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.186 04:44:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:33.186 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.186 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 3 00:07:33.187 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.187 04:44:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:33.187 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.187 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 4 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 5 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 6 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 7 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 8 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 9 00:07:33.457 
04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 10 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.457 04:44:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:33.457 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.457 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.457 04:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.458 04:44:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:33.458 04:44:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:33.458 04:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.458 04:44:47 -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 ************************************ 00:07:34.396 END TEST scheduler_create_thread 00:07:34.396 ************************************ 00:07:34.396 04:44:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:34.396 00:07:34.396 real 0m1.171s 00:07:34.396 user 0m0.009s 00:07:34.396 sys 0m0.005s 00:07:34.396 04:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.396 04:44:48 -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 04:44:48 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:34.396 04:44:48 -- scheduler/scheduler.sh@46 -- # killprocess 40574 00:07:34.396 04:44:48 -- common/autotest_common.sh@926 -- # '[' -z 40574 ']' 00:07:34.396 04:44:48 -- common/autotest_common.sh@930 -- # kill -0 40574 00:07:34.396 04:44:48 -- common/autotest_common.sh@931 -- # uname 00:07:34.396 04:44:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:34.396 04:44:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40574 00:07:34.396 killing process with pid 40574 00:07:34.396 04:44:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:34.396 04:44:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:34.396 04:44:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40574' 00:07:34.396 04:44:48 -- common/autotest_common.sh@945 -- # kill 40574 00:07:34.396 04:44:48 -- common/autotest_common.sh@950 -- # wait 40574 00:07:34.965 [2024-05-15 04:44:49.038378] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
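[editor's note] The thread ids 2 through 12 above are returned by the scheduler test app's plugin RPCs. The same sequence can be driven by hand with rpc.py, assuming the scheduler app from the log is listening on the default /var/tmp/spdk.sock and the scheduler_plugin module is importable (a condensed sketch of what scheduler.sh traces above):

    rpc="scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle, pinned to core 0
    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # unpinned, created idle
    $rpc scheduler_thread_set_active "$tid" 50                    # then raised to 50% load
    $rpc scheduler_thread_delete "$tid"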
00:07:36.353 ************************************ 00:07:36.353 END TEST event_scheduler 00:07:36.353 ************************************ 00:07:36.353 00:07:36.353 real 0m4.547s 00:07:36.353 user 0m7.722s 00:07:36.353 sys 0m0.518s 00:07:36.353 04:44:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.353 04:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.353 04:44:50 -- event/event.sh@51 -- # modprobe -n nbd 00:07:36.353 modprobe: FATAL: Module nbd not found. 00:07:36.353 04:44:50 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:36.353 04:44:50 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:36.353 04:44:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.353 04:44:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.353 04:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.353 ************************************ 00:07:36.353 START TEST cpu_locks 00:07:36.353 ************************************ 00:07:36.353 04:44:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:36.353 * Looking for test storage... 00:07:36.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:36.353 04:44:50 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:36.353 04:44:50 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:36.353 04:44:50 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:36.353 04:44:50 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:36.353 04:44:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.353 04:44:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.353 04:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.353 ************************************ 00:07:36.353 START TEST default_locks 00:07:36.353 ************************************ 00:07:36.353 04:44:50 -- common/autotest_common.sh@1104 -- # default_locks 00:07:36.353 04:44:50 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=40728 00:07:36.353 04:44:50 -- event/cpu_locks.sh@47 -- # waitforlisten 40728 00:07:36.353 04:44:50 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.353 04:44:50 -- common/autotest_common.sh@819 -- # '[' -z 40728 ']' 00:07:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.353 04:44:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.353 04:44:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.353 04:44:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.353 04:44:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.353 04:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.612 [2024-05-15 04:44:50.704785] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
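[editor's note] modprobe -n is a dry run: it resolves the module and its dependencies without inserting anything, so its exit status is a cheap capability probe. The harness uses it above to skip nbd-dependent cases on kernels built without the module (run_nbd_dependent_tests below is a hypothetical stand-in for the gated cases, not a harness function):

    if modprobe -n nbd 2>/dev/null; then
        run_nbd_dependent_tests
    else
        echo "nbd module unavailable; skipping"
    fi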
00:07:36.612 [2024-05-15 04:44:50.704965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40728 ] 00:07:36.871 [2024-05-15 04:44:50.861874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.130 [2024-05-15 04:44:51.102637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.130 [2024-05-15 04:44:51.102999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.044 04:44:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:39.044 04:44:52 -- common/autotest_common.sh@852 -- # return 0 00:07:39.044 04:44:52 -- event/cpu_locks.sh@49 -- # locks_exist 40728 00:07:39.044 04:44:52 -- event/cpu_locks.sh@22 -- # lslocks -p 40728 00:07:39.044 04:44:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.615 04:44:53 -- event/cpu_locks.sh@50 -- # killprocess 40728 00:07:39.615 04:44:53 -- common/autotest_common.sh@926 -- # '[' -z 40728 ']' 00:07:39.615 04:44:53 -- common/autotest_common.sh@930 -- # kill -0 40728 00:07:39.615 04:44:53 -- common/autotest_common.sh@931 -- # uname 00:07:39.616 04:44:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.616 04:44:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40728 00:07:39.616 killing process with pid 40728 00:07:39.616 04:44:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:39.616 04:44:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:39.616 04:44:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40728' 00:07:39.616 04:44:53 -- common/autotest_common.sh@945 -- # kill 40728 00:07:39.616 04:44:53 -- common/autotest_common.sh@950 -- # wait 40728 00:07:42.148 04:44:56 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 40728 00:07:42.148 04:44:56 -- common/autotest_common.sh@640 -- # local es=0 00:07:42.148 04:44:56 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 40728 00:07:42.148 04:44:56 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:42.148 04:44:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.149 04:44:56 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:42.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.149 04:44:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.149 04:44:56 -- common/autotest_common.sh@643 -- # waitforlisten 40728 00:07:42.149 04:44:56 -- common/autotest_common.sh@819 -- # '[' -z 40728 ']' 00:07:42.149 04:44:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.149 04:44:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.149 04:44:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:42.149 04:44:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.149 04:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (40728) - No such process 00:07:42.149 ERROR: process (pid: 40728) is no longer running 00:07:42.149 04:44:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.149 04:44:56 -- common/autotest_common.sh@852 -- # return 1 00:07:42.149 ************************************ 00:07:42.149 END TEST default_locks 00:07:42.149 ************************************ 00:07:42.149 04:44:56 -- common/autotest_common.sh@643 -- # es=1 00:07:42.149 04:44:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:42.149 04:44:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:42.149 04:44:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:42.149 04:44:56 -- event/cpu_locks.sh@54 -- # no_locks 00:07:42.149 04:44:56 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:07:42.149 04:44:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:42.149 04:44:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:42.149 00:07:42.149 real 0m5.774s 00:07:42.149 user 0m5.910s 00:07:42.149 sys 0m1.326s 00:07:42.149 04:44:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.149 04:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.149 04:44:56 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:42.149 04:44:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.149 04:44:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.149 04:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.408 ************************************ 00:07:42.408 START TEST default_locks_via_rpc 00:07:42.408 ************************************ 00:07:42.408 04:44:56 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:07:42.408 04:44:56 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=40834 00:07:42.408 04:44:56 -- event/cpu_locks.sh@63 -- # waitforlisten 40834 00:07:42.408 04:44:56 -- common/autotest_common.sh@819 -- # '[' -z 40834 ']' 00:07:42.408 04:44:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.408 04:44:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.408 04:44:56 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.408 04:44:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.408 04:44:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.408 04:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.408 [2024-05-15 04:44:56.534150] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
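[editor's note] The locks_exist check traced above boils down to one assertion: an spdk_tgt started with -m 0x1 must hold a POSIX lock on /var/tmp/spdk_cpu_lock_000, which lslocks -p <pid> | grep spdk_cpu_lock confirms. The same state can be inspected from outside with flock; a sketch assuming the zero-padded one-file-per-core naming shown in the trace and a single spdk_tgt instance:

    pid=$(pidof spdk_tgt)
    lslocks -p "$pid" | grep spdk_cpu_lock          # one entry per claimed core
    flock -n /var/tmp/spdk_cpu_lock_000 true \
        || echo "core 0 is claimed by pid $pid"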
00:07:42.408 [2024-05-15 04:44:56.534317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40834 ] 00:07:42.667 [2024-05-15 04:44:56.687046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.926 [2024-05-15 04:44:56.913146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.926 [2024-05-15 04:44:56.913353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.303 04:44:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:44.303 04:44:58 -- common/autotest_common.sh@852 -- # return 0 00:07:44.303 04:44:58 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:44.303 04:44:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.304 04:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.304 04:44:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.304 04:44:58 -- event/cpu_locks.sh@67 -- # no_locks 00:07:44.304 04:44:58 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:07:44.304 04:44:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:44.304 04:44:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:44.304 04:44:58 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:44.304 04:44:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.304 04:44:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.304 04:44:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.304 04:44:58 -- event/cpu_locks.sh@71 -- # locks_exist 40834 00:07:44.304 04:44:58 -- event/cpu_locks.sh@22 -- # lslocks -p 40834 00:07:44.304 04:44:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.871 04:44:58 -- event/cpu_locks.sh@73 -- # killprocess 40834 00:07:44.871 04:44:58 -- common/autotest_common.sh@926 -- # '[' -z 40834 ']' 00:07:44.871 04:44:58 -- common/autotest_common.sh@930 -- # kill -0 40834 00:07:44.871 04:44:58 -- common/autotest_common.sh@931 -- # uname 00:07:44.871 04:44:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.871 04:44:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40834 00:07:44.871 killing process with pid 40834 00:07:44.871 04:44:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:44.871 04:44:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:44.871 04:44:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40834' 00:07:44.871 04:44:58 -- common/autotest_common.sh@945 -- # kill 40834 00:07:44.871 04:44:58 -- common/autotest_common.sh@950 -- # wait 40834 00:07:47.406 ************************************ 00:07:47.406 END TEST default_locks_via_rpc 00:07:47.406 ************************************ 00:07:47.406 00:07:47.406 real 0m5.191s 00:07:47.406 user 0m5.272s 00:07:47.406 sys 0m1.272s 00:07:47.406 04:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.406 04:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.406 04:45:01 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:47.406 04:45:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.406 04:45:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.406 04:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.673 
************************************ 00:07:47.673 START TEST non_locking_app_on_locked_coremask 00:07:47.673 ************************************ 00:07:47.673 04:45:01 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:07:47.673 04:45:01 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=40932 00:07:47.673 04:45:01 -- event/cpu_locks.sh@81 -- # waitforlisten 40932 /var/tmp/spdk.sock 00:07:47.673 04:45:01 -- common/autotest_common.sh@819 -- # '[' -z 40932 ']' 00:07:47.673 04:45:01 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.673 04:45:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.673 04:45:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.673 04:45:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.673 04:45:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.673 04:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.673 [2024-05-15 04:45:01.786644] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:47.673 [2024-05-15 04:45:01.787002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40932 ] 00:07:47.932 [2024-05-15 04:45:01.969075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.189 [2024-05-15 04:45:02.204952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.189 [2024-05-15 04:45:02.205170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.090 04:45:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.090 04:45:03 -- common/autotest_common.sh@852 -- # return 0 00:07:50.090 04:45:03 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=40976 00:07:50.090 04:45:03 -- event/cpu_locks.sh@85 -- # waitforlisten 40976 /var/tmp/spdk2.sock 00:07:50.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.090 04:45:03 -- common/autotest_common.sh@819 -- # '[' -z 40976 ']' 00:07:50.090 04:45:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.090 04:45:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.090 04:45:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.090 04:45:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.090 04:45:03 -- common/autotest_common.sh@10 -- # set +x 00:07:50.090 04:45:03 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:50.090 [2024-05-15 04:45:04.126009] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:50.090 [2024-05-15 04:45:04.126187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40976 ] 00:07:50.090 [2024-05-15 04:45:04.295967] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
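[editor's note] The two launches above are the heart of non_locking_app_on_locked_coremask: the first target claims core 0's lock file, and the second boots anyway only because --disable-cpumask-locks makes it skip claiming, hence the "CPU core locks deactivated" notice. Reduced to its essentials (relative paths stand in for the absolute ones in the log):

    build/bin/spdk_tgt -m 0x1 &                                    # holds /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 anyway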
00:07:50.090 [2024-05-15 04:45:04.296044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.658 [2024-05-15 04:45:04.782363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.658 [2024-05-15 04:45:04.782578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.193 04:45:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:53.193 04:45:07 -- common/autotest_common.sh@852 -- # return 0 00:07:53.193 04:45:07 -- event/cpu_locks.sh@87 -- # locks_exist 40932 00:07:53.193 04:45:07 -- event/cpu_locks.sh@22 -- # lslocks -p 40932 00:07:53.193 04:45:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.569 04:45:08 -- event/cpu_locks.sh@89 -- # killprocess 40932 00:07:54.569 04:45:08 -- common/autotest_common.sh@926 -- # '[' -z 40932 ']' 00:07:54.569 04:45:08 -- common/autotest_common.sh@930 -- # kill -0 40932 00:07:54.569 04:45:08 -- common/autotest_common.sh@931 -- # uname 00:07:54.569 04:45:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:54.569 04:45:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40932 00:07:54.569 04:45:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:54.569 04:45:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:54.569 killing process with pid 40932 00:07:54.569 04:45:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40932' 00:07:54.569 04:45:08 -- common/autotest_common.sh@945 -- # kill 40932 00:07:54.569 04:45:08 -- common/autotest_common.sh@950 -- # wait 40932 00:07:59.838 04:45:13 -- event/cpu_locks.sh@90 -- # killprocess 40976 00:07:59.838 04:45:13 -- common/autotest_common.sh@926 -- # '[' -z 40976 ']' 00:07:59.838 04:45:13 -- common/autotest_common.sh@930 -- # kill -0 40976 00:07:59.838 04:45:13 -- common/autotest_common.sh@931 -- # uname 00:07:59.838 04:45:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:59.838 04:45:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40976 00:07:59.838 killing process with pid 40976 00:07:59.838 04:45:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:59.838 04:45:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:59.838 04:45:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40976' 00:07:59.838 04:45:13 -- common/autotest_common.sh@945 -- # kill 40976 00:07:59.838 04:45:13 -- common/autotest_common.sh@950 -- # wait 40976 00:08:02.373 ************************************ 00:08:02.373 END TEST non_locking_app_on_locked_coremask 00:08:02.373 ************************************ 00:08:02.373 00:08:02.373 real 0m14.919s 00:08:02.373 user 0m15.763s 00:08:02.373 sys 0m2.670s 00:08:02.373 04:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.373 04:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 04:45:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:02.632 04:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.632 04:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.632 04:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 ************************************ 00:08:02.632 START TEST locking_app_on_unlocked_coremask 00:08:02.632 ************************************ 00:08:02.632 04:45:16 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:08:02.632 04:45:16 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=41173 00:08:02.632 04:45:16 -- event/cpu_locks.sh@99 -- # waitforlisten 41173 /var/tmp/spdk.sock 00:08:02.632 04:45:16 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:02.632 04:45:16 -- common/autotest_common.sh@819 -- # '[' -z 41173 ']' 00:08:02.632 04:45:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.632 04:45:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:02.632 04:45:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.632 04:45:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:02.632 04:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 [2024-05-15 04:45:16.763613] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:02.632 [2024-05-15 04:45:16.763979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41173 ] 00:08:02.891 [2024-05-15 04:45:16.918900] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:02.891 [2024-05-15 04:45:16.918958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.150 [2024-05-15 04:45:17.157091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.150 [2024-05-15 04:45:17.157327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.087 04:45:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.087 04:45:18 -- common/autotest_common.sh@852 -- # return 0 00:08:04.087 04:45:18 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=41208 00:08:04.087 04:45:18 -- event/cpu_locks.sh@103 -- # waitforlisten 41208 /var/tmp/spdk2.sock 00:08:04.087 04:45:18 -- common/autotest_common.sh@819 -- # '[' -z 41208 ']' 00:08:04.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.087 04:45:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.087 04:45:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:04.087 04:45:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.087 04:45:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:04.087 04:45:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.087 04:45:18 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:04.346 [2024-05-15 04:45:18.414982] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:04.346 [2024-05-15 04:45:18.415148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41208 ] 00:08:04.346 [2024-05-15 04:45:18.570158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.914 [2024-05-15 04:45:19.049674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.914 [2024-05-15 04:45:19.049880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.449 04:45:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:07.449 04:45:21 -- common/autotest_common.sh@852 -- # return 0 00:08:07.449 04:45:21 -- event/cpu_locks.sh@105 -- # locks_exist 41208 00:08:07.449 04:45:21 -- event/cpu_locks.sh@22 -- # lslocks -p 41208 00:08:07.449 04:45:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:08.826 04:45:22 -- event/cpu_locks.sh@107 -- # killprocess 41173 00:08:08.826 04:45:22 -- common/autotest_common.sh@926 -- # '[' -z 41173 ']' 00:08:08.826 04:45:22 -- common/autotest_common.sh@930 -- # kill -0 41173 00:08:08.826 04:45:22 -- common/autotest_common.sh@931 -- # uname 00:08:08.826 04:45:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:08.826 04:45:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41173 00:08:08.826 04:45:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:08.826 killing process with pid 41173 00:08:08.826 04:45:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:08.826 04:45:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41173' 00:08:08.827 04:45:22 -- common/autotest_common.sh@945 -- # kill 41173 00:08:08.827 04:45:22 -- common/autotest_common.sh@950 -- # wait 41173 00:08:14.170 04:45:28 -- event/cpu_locks.sh@108 -- # killprocess 41208 00:08:14.170 04:45:28 -- common/autotest_common.sh@926 -- # '[' -z 41208 ']' 00:08:14.170 04:45:28 -- common/autotest_common.sh@930 -- # kill -0 41208 00:08:14.170 04:45:28 -- common/autotest_common.sh@931 -- # uname 00:08:14.170 04:45:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:14.170 04:45:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41208 00:08:14.170 04:45:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:14.170 killing process with pid 41208 00:08:14.170 04:45:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:14.170 04:45:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41208' 00:08:14.170 04:45:28 -- common/autotest_common.sh@945 -- # kill 41208 00:08:14.170 04:45:28 -- common/autotest_common.sh@950 -- # wait 41208 00:08:16.773 ************************************ 00:08:16.773 END TEST locking_app_on_unlocked_coremask 00:08:16.773 ************************************ 00:08:16.773 00:08:16.773 real 0m14.150s 00:08:16.773 user 0m14.735s 00:08:16.773 sys 0m2.631s 00:08:16.773 04:45:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.773 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:08:16.773 04:45:30 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:16.773 04:45:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:16.773 04:45:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.773 04:45:30 -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.773 ************************************ 00:08:16.773 START TEST locking_app_on_locked_coremask 00:08:16.773 ************************************ 00:08:16.773 04:45:30 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:08:16.773 04:45:30 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=41395 00:08:16.773 04:45:30 -- event/cpu_locks.sh@116 -- # waitforlisten 41395 /var/tmp/spdk.sock 00:08:16.773 04:45:30 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.773 04:45:30 -- common/autotest_common.sh@819 -- # '[' -z 41395 ']' 00:08:16.773 04:45:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.773 04:45:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:16.773 04:45:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.773 04:45:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:16.773 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:08:16.773 [2024-05-15 04:45:30.964189] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:16.773 [2024-05-15 04:45:30.964354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41395 ] 00:08:17.032 [2024-05-15 04:45:31.122629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.291 [2024-05-15 04:45:31.355402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.291 [2024-05-15 04:45:31.355586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.226 04:45:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.226 04:45:32 -- common/autotest_common.sh@852 -- # return 0 00:08:18.226 04:45:32 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=41430 00:08:18.226 04:45:32 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 41430 /var/tmp/spdk2.sock 00:08:18.226 04:45:32 -- common/autotest_common.sh@640 -- # local es=0 00:08:18.226 04:45:32 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 41430 /var/tmp/spdk2.sock 00:08:18.226 04:45:32 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:18.226 04:45:32 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:18.226 04:45:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:18.226 04:45:32 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:18.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:18.226 04:45:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:18.226 04:45:32 -- common/autotest_common.sh@643 -- # waitforlisten 41430 /var/tmp/spdk2.sock 00:08:18.226 04:45:32 -- common/autotest_common.sh@819 -- # '[' -z 41430 ']' 00:08:18.226 04:45:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:18.226 04:45:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:18.226 04:45:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:18.226 04:45:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:18.226 04:45:32 -- common/autotest_common.sh@10 -- # set +x 00:08:18.485 [2024-05-15 04:45:32.572698] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:18.485 [2024-05-15 04:45:32.572914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41430 ] 00:08:18.744 [2024-05-15 04:45:32.739376] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 41395 has claimed it. 00:08:18.744 [2024-05-15 04:45:32.739449] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:19.003 ERROR: process (pid: 41430) is no longer running 00:08:19.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (41430) - No such process 00:08:19.003 04:45:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:19.003 04:45:33 -- common/autotest_common.sh@852 -- # return 1 00:08:19.003 04:45:33 -- common/autotest_common.sh@643 -- # es=1 00:08:19.003 04:45:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:19.003 04:45:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:19.003 04:45:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:19.003 04:45:33 -- event/cpu_locks.sh@122 -- # locks_exist 41395 00:08:19.003 04:45:33 -- event/cpu_locks.sh@22 -- # lslocks -p 41395 00:08:19.003 04:45:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.940 04:45:33 -- event/cpu_locks.sh@124 -- # killprocess 41395 00:08:19.940 04:45:33 -- common/autotest_common.sh@926 -- # '[' -z 41395 ']' 00:08:19.940 04:45:33 -- common/autotest_common.sh@930 -- # kill -0 41395 00:08:19.940 04:45:33 -- common/autotest_common.sh@931 -- # uname 00:08:19.940 04:45:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:19.940 04:45:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41395 00:08:19.940 killing process with pid 41395 00:08:19.940 04:45:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:19.940 04:45:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:19.940 04:45:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41395' 00:08:19.940 04:45:34 -- common/autotest_common.sh@945 -- # kill 41395 00:08:19.940 04:45:34 -- common/autotest_common.sh@950 -- # wait 41395 00:08:22.471 ************************************ 00:08:22.471 END TEST locking_app_on_locked_coremask 00:08:22.471 ************************************ 00:08:22.471 00:08:22.471 real 0m5.786s 00:08:22.471 user 0m5.955s 00:08:22.471 sys 0m1.391s 00:08:22.471 04:45:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.471 04:45:36 -- common/autotest_common.sh@10 -- # set +x 00:08:22.471 04:45:36 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:22.471 04:45:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.471 04:45:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.471 04:45:36 -- common/autotest_common.sh@10 -- # set +x 00:08:22.471 ************************************ 00:08:22.471 START TEST locking_overlapped_coremask 00:08:22.471 ************************************ 00:08:22.471 04:45:36 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:08:22.471 04:45:36 
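[editor's note] NOT, wrapped around the failing waitforlisten calls above, inverts a command's exit status so that an expected failure passes the test. A minimal equivalent of the helper (the shipped version in autotest_common.sh also checks how the command was invoked before inverting):

    NOT() {
        if "$@"; then
            return 1      # the command unexpectedly succeeded
        fi
        return 0          # failure was the expected outcome
    }
    NOT waitforlisten 41430 /var/tmp/spdk2.sock   # passes: startup was refused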
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=41515 00:08:22.471 04:45:36 -- event/cpu_locks.sh@133 -- # waitforlisten 41515 /var/tmp/spdk.sock 00:08:22.471 04:45:36 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:22.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.471 04:45:36 -- common/autotest_common.sh@819 -- # '[' -z 41515 ']' 00:08:22.471 04:45:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.471 04:45:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:22.471 04:45:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.471 04:45:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:22.471 04:45:36 -- common/autotest_common.sh@10 -- # set +x 00:08:22.730 [2024-05-15 04:45:36.798960] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:22.730 [2024-05-15 04:45:36.799129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41515 ] 00:08:22.730 [2024-05-15 04:45:36.951207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.989 [2024-05-15 04:45:37.180911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.989 [2024-05-15 04:45:37.181272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.989 [2024-05-15 04:45:37.181578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.989 [2024-05-15 04:45:37.181571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.367 04:45:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.367 04:45:38 -- common/autotest_common.sh@852 -- # return 0 00:08:24.367 04:45:38 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=41540 00:08:24.367 04:45:38 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 41540 /var/tmp/spdk2.sock 00:08:24.367 04:45:38 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:24.367 04:45:38 -- common/autotest_common.sh@640 -- # local es=0 00:08:24.367 04:45:38 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 41540 /var/tmp/spdk2.sock 00:08:24.367 04:45:38 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:24.367 04:45:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.367 04:45:38 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:24.367 04:45:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.367 04:45:38 -- common/autotest_common.sh@643 -- # waitforlisten 41540 /var/tmp/spdk2.sock 00:08:24.367 04:45:38 -- common/autotest_common.sh@819 -- # '[' -z 41540 ']' 00:08:24.367 04:45:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.367 04:45:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:24.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.367 04:45:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
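[editor's note] The failure being provoked here comes down to mask arithmetic: -m 0x7 pins the first target to cores 0-2 and -m 0x1c asks the second for cores 2-4, so the two masks intersect on core 2. The overlap is a bitwise AND:

    echo $(( 0x7 & 0x1c ))    # 4 = bit 2 set, i.e. both masks claim core 2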
00:08:24.367 04:45:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:24.367 04:45:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.367 [2024-05-15 04:45:38.461386] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:24.367 [2024-05-15 04:45:38.461553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41540 ] 00:08:24.626 [2024-05-15 04:45:38.669295] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 41515 has claimed it. 00:08:24.626 [2024-05-15 04:45:38.669366] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:24.885 ERROR: process (pid: 41540) is no longer running 00:08:24.885 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (41540) - No such process 00:08:24.885 04:45:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.885 04:45:39 -- common/autotest_common.sh@852 -- # return 1 00:08:24.885 04:45:39 -- common/autotest_common.sh@643 -- # es=1 00:08:24.885 04:45:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:24.885 04:45:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:24.885 04:45:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:24.885 04:45:39 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:24.885 04:45:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:24.885 04:45:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:24.885 04:45:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:24.885 04:45:39 -- event/cpu_locks.sh@141 -- # killprocess 41515 00:08:24.885 04:45:39 -- common/autotest_common.sh@926 -- # '[' -z 41515 ']' 00:08:24.885 04:45:39 -- common/autotest_common.sh@930 -- # kill -0 41515 00:08:24.885 04:45:39 -- common/autotest_common.sh@931 -- # uname 00:08:24.885 04:45:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:24.885 04:45:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41515 00:08:24.885 killing process with pid 41515 00:08:24.885 04:45:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:24.885 04:45:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:24.885 04:45:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41515' 00:08:24.885 04:45:39 -- common/autotest_common.sh@945 -- # kill 41515 00:08:24.885 04:45:39 -- common/autotest_common.sh@950 -- # wait 41515 00:08:28.173 ************************************ 00:08:28.173 END TEST locking_overlapped_coremask 00:08:28.173 ************************************ 00:08:28.173 00:08:28.173 real 0m5.079s 00:08:28.173 user 0m13.259s 00:08:28.173 sys 0m0.747s 00:08:28.173 04:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.173 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:08:28.173 04:45:41 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:28.173 04:45:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.173 04:45:41 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.173 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:08:28.173 ************************************ 00:08:28.173 START TEST locking_overlapped_coremask_via_rpc 00:08:28.173 ************************************ 00:08:28.173 04:45:41 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:08:28.173 04:45:41 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=41619 00:08:28.173 04:45:41 -- event/cpu_locks.sh@149 -- # waitforlisten 41619 /var/tmp/spdk.sock 00:08:28.173 04:45:41 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:28.173 04:45:41 -- common/autotest_common.sh@819 -- # '[' -z 41619 ']' 00:08:28.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.173 04:45:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.173 04:45:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.173 04:45:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.173 04:45:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.173 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:08:28.173 [2024-05-15 04:45:41.964398] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:28.173 [2024-05-15 04:45:41.964572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41619 ] 00:08:28.173 [2024-05-15 04:45:42.144244] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:28.173 [2024-05-15 04:45:42.144341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.173 [2024-05-15 04:45:42.371691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.173 [2024-05-15 04:45:42.372070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.173 [2024-05-15 04:45:42.372359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.173 [2024-05-15 04:45:42.372349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.556 04:45:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:29.556 04:45:43 -- common/autotest_common.sh@852 -- # return 0 00:08:29.556 04:45:43 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=41646 00:08:29.556 04:45:43 -- event/cpu_locks.sh@153 -- # waitforlisten 41646 /var/tmp/spdk2.sock 00:08:29.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.556 04:45:43 -- common/autotest_common.sh@819 -- # '[' -z 41646 ']' 00:08:29.556 04:45:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.556 04:45:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.556 04:45:43 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:29.556 04:45:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:29.556 04:45:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.556 04:45:43 -- common/autotest_common.sh@10 -- # set +x 00:08:29.556 [2024-05-15 04:45:43.634849] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:29.556 [2024-05-15 04:45:43.635019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41646 ] 00:08:29.815 [2024-05-15 04:45:43.845455] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:29.815 [2024-05-15 04:45:43.845517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.381 [2024-05-15 04:45:44.349608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.381 [2024-05-15 04:45:44.350003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.381 [2024-05-15 04:45:44.360873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:30.381 [2024-05-15 04:45:44.371722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.937 04:45:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.937 04:45:47 -- common/autotest_common.sh@852 -- # return 0 00:08:32.937 04:45:47 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:32.937 04:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.937 04:45:47 -- common/autotest_common.sh@10 -- # set +x 00:08:32.937 04:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.937 04:45:47 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.937 04:45:47 -- common/autotest_common.sh@640 -- # local es=0 00:08:32.937 04:45:47 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.937 04:45:47 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:08:32.938 04:45:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:32.938 04:45:47 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:08:32.938 04:45:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:32.938 04:45:47 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.938 04:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.938 04:45:47 -- common/autotest_common.sh@10 -- # set +x 00:08:32.938 [2024-05-15 04:45:47.029911] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 41619 has claimed it. 
00:08:32.938 request: 00:08:32.938 { 00:08:32.938 "method": "framework_enable_cpumask_locks", 00:08:32.938 "req_id": 1 00:08:32.938 } 00:08:32.938 Got JSON-RPC error response 00:08:32.938 response: 00:08:32.938 { 00:08:32.938 "code": -32603, 00:08:32.938 "message": "Failed to claim CPU core: 2" 00:08:32.938 } 00:08:32.938 04:45:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:08:32.938 04:45:47 -- common/autotest_common.sh@643 -- # es=1 00:08:32.938 04:45:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:32.938 04:45:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:32.938 04:45:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:32.938 04:45:47 -- event/cpu_locks.sh@158 -- # waitforlisten 41619 /var/tmp/spdk.sock 00:08:32.938 04:45:47 -- common/autotest_common.sh@819 -- # '[' -z 41619 ']' 00:08:32.938 04:45:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.938 04:45:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.938 04:45:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.938 04:45:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.938 04:45:47 -- common/autotest_common.sh@10 -- # set +x 00:08:33.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:33.197 04:45:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.197 04:45:47 -- common/autotest_common.sh@852 -- # return 0 00:08:33.197 04:45:47 -- event/cpu_locks.sh@159 -- # waitforlisten 41646 /var/tmp/spdk2.sock 00:08:33.197 04:45:47 -- common/autotest_common.sh@819 -- # '[' -z 41646 ']' 00:08:33.197 04:45:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:33.197 04:45:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.197 04:45:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
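[editor's note] This JSON exchange is the RPC-level version of the startup conflict seen earlier: with claiming deferred via --disable-cpumask-locks, both targets boot, the first claims cores 0-2 through framework_enable_cpumask_locks, and the second's identical call is refused with -32603. Replaying it by hand, with the socket paths from the log:

    scripts/rpc.py framework_enable_cpumask_locks                 # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected: -32603, core 2 already claimed"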
00:08:33.197 04:45:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.197 04:45:47 -- common/autotest_common.sh@10 -- # set +x 00:08:33.197 ************************************ 00:08:33.197 END TEST locking_overlapped_coremask_via_rpc 00:08:33.197 ************************************ 00:08:33.197 04:45:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.197 04:45:47 -- common/autotest_common.sh@852 -- # return 0 00:08:33.197 04:45:47 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:33.197 04:45:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:33.197 04:45:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:33.197 04:45:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:33.197 00:08:33.197 real 0m5.596s 00:08:33.197 user 0m1.779s 00:08:33.197 sys 0m0.281s 00:08:33.197 04:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.197 04:45:47 -- common/autotest_common.sh@10 -- # set +x 00:08:33.455 04:45:47 -- event/cpu_locks.sh@174 -- # cleanup 00:08:33.455 04:45:47 -- event/cpu_locks.sh@15 -- # [[ -z 41619 ]] 00:08:33.455 04:45:47 -- event/cpu_locks.sh@15 -- # killprocess 41619 00:08:33.455 04:45:47 -- common/autotest_common.sh@926 -- # '[' -z 41619 ']' 00:08:33.455 04:45:47 -- common/autotest_common.sh@930 -- # kill -0 41619 00:08:33.455 04:45:47 -- common/autotest_common.sh@931 -- # uname 00:08:33.455 04:45:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:33.455 04:45:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41619 00:08:33.455 killing process with pid 41619 00:08:33.455 04:45:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:33.455 04:45:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:33.455 04:45:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41619' 00:08:33.455 04:45:47 -- common/autotest_common.sh@945 -- # kill 41619 00:08:33.455 04:45:47 -- common/autotest_common.sh@950 -- # wait 41619 00:08:36.741 04:45:50 -- event/cpu_locks.sh@16 -- # [[ -z 41646 ]] 00:08:36.741 04:45:50 -- event/cpu_locks.sh@16 -- # killprocess 41646 00:08:36.741 04:45:50 -- common/autotest_common.sh@926 -- # '[' -z 41646 ']' 00:08:36.741 04:45:50 -- common/autotest_common.sh@930 -- # kill -0 41646 00:08:36.741 04:45:50 -- common/autotest_common.sh@931 -- # uname 00:08:36.741 04:45:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:36.742 04:45:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41646 00:08:36.742 killing process with pid 41646 00:08:36.742 04:45:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:36.742 04:45:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:36.742 04:45:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41646' 00:08:36.742 04:45:50 -- common/autotest_common.sh@945 -- # kill 41646 00:08:36.742 04:45:50 -- common/autotest_common.sh@950 -- # wait 41646 00:08:39.272 04:45:52 -- event/cpu_locks.sh@18 -- # rm -f 00:08:39.272 Process with pid 41619 is not found 00:08:39.272 Process with pid 41646 is not found 00:08:39.272 04:45:52 -- event/cpu_locks.sh@1 -- # cleanup 00:08:39.272 04:45:52 -- event/cpu_locks.sh@15 -- # [[ -z 41619 ]] 
00:08:39.272 04:45:52 -- event/cpu_locks.sh@15 -- # killprocess 41619 00:08:39.272 04:45:52 -- common/autotest_common.sh@926 -- # '[' -z 41619 ']' 00:08:39.272 04:45:52 -- common/autotest_common.sh@930 -- # kill -0 41619 00:08:39.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (41619) - No such process 00:08:39.272 04:45:52 -- common/autotest_common.sh@953 -- # echo 'Process with pid 41619 is not found' 00:08:39.272 04:45:52 -- event/cpu_locks.sh@16 -- # [[ -z 41646 ]] 00:08:39.272 04:45:52 -- event/cpu_locks.sh@16 -- # killprocess 41646 00:08:39.272 04:45:52 -- common/autotest_common.sh@926 -- # '[' -z 41646 ']' 00:08:39.272 04:45:52 -- common/autotest_common.sh@930 -- # kill -0 41646 00:08:39.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (41646) - No such process 00:08:39.272 04:45:52 -- common/autotest_common.sh@953 -- # echo 'Process with pid 41646 is not found' 00:08:39.272 04:45:52 -- event/cpu_locks.sh@18 -- # rm -f 00:08:39.272 ************************************ 00:08:39.272 END TEST cpu_locks 00:08:39.272 ************************************ 00:08:39.272 00:08:39.272 real 1m2.482s 00:08:39.272 user 1m44.196s 00:08:39.272 sys 0m11.729s 00:08:39.272 04:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.272 04:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:39.272 ************************************ 00:08:39.272 END TEST event 00:08:39.272 ************************************ 00:08:39.272 00:08:39.272 real 1m13.169s 00:08:39.272 user 2m0.049s 00:08:39.272 sys 0m12.945s 00:08:39.272 04:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.272 04:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:39.272 04:45:53 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:39.272 04:45:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:39.272 04:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.272 04:45:53 -- common/autotest_common.sh@10 -- # set +x 00:08:39.272 ************************************ 00:08:39.272 START TEST thread 00:08:39.272 ************************************ 00:08:39.272 04:45:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:39.272 * Looking for test storage... 00:08:39.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:39.272 04:45:53 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:39.272 04:45:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:39.272 04:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.272 04:45:53 -- common/autotest_common.sh@10 -- # set +x 00:08:39.272 ************************************ 00:08:39.272 START TEST thread_poller_perf 00:08:39.272 ************************************ 00:08:39.272 04:45:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:39.272 [2024-05-15 04:45:53.166989] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:39.272 [2024-05-15 04:45:53.167161] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41884 ] 00:08:39.272 [2024-05-15 04:45:53.350089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.529 [2024-05-15 04:45:53.554480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.529 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:40.905 ====================================== 00:08:40.905 busy:2106996094 (cyc) 00:08:40.905 total_run_count: 1648000 00:08:40.905 tsc_hz: 2100000000 (cyc) 00:08:40.905 ====================================== 00:08:40.905 poller_cost: 1278 (cyc), 608 (nsec) 00:08:40.905 ************************************ 00:08:40.905 END TEST thread_poller_perf 00:08:40.905 ************************************ 00:08:40.905 00:08:40.905 real 0m1.824s 00:08:40.905 user 0m1.578s 00:08:40.905 sys 0m0.146s 00:08:40.905 04:45:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.905 04:45:54 -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 04:45:54 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:40.905 04:45:54 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:40.905 04:45:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.905 04:45:54 -- common/autotest_common.sh@10 -- # set +x 00:08:40.905 ************************************ 00:08:40.905 START TEST thread_poller_perf 00:08:40.905 ************************************ 00:08:40.906 04:45:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:40.906 [2024-05-15 04:45:55.046506] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:40.906 [2024-05-15 04:45:55.046914] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41934 ] 00:08:41.164 [2024-05-15 04:45:55.225463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.423 [2024-05-15 04:45:55.426579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.423 Running 1000 pollers for 1 seconds with 0 microseconds period. 
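(The poller_cost line in each summary is simply the two counters above it divided out: 2106996094 busy cycles / 1648000 poller runs ≈ 1278 cycles per invocation, which at the reported tsc_hz of 2100000000 is 1278 / 2.1 ≈ 608 ns. The run below repeats the measurement with -l 0, a zero-microsecond period: the pollers then fire on every reactor loop iteration rather than off a timer, so the per-invocation cost drops to roughly bare call overhead. The busy-spinning reactor is also why user CPU time can exceed wall-clock time in the timing summaries of these suites.)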
00:08:42.799 ====================================== 00:08:42.799 busy:2104898988 (cyc) 00:08:42.799 total_run_count: 17836000 00:08:42.799 tsc_hz: 2100000000 (cyc) 00:08:42.799 ====================================== 00:08:42.799 poller_cost: 118 (cyc), 56 (nsec) 00:08:42.799 ************************************ 00:08:42.799 END TEST thread_poller_perf 00:08:42.799 ************************************ 00:08:42.799 00:08:42.799 real 0m1.820s 00:08:42.799 user 0m1.582s 00:08:42.799 sys 0m0.136s 00:08:42.799 04:45:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.799 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:08:42.800 04:45:56 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:08:42.800 04:45:56 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:08:42.800 04:45:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.800 04:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.800 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:08:42.800 ************************************ 00:08:42.800 START TEST thread_spdk_lock 00:08:42.800 ************************************ 00:08:42.800 04:45:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:08:42.800 [2024-05-15 04:45:56.916828] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:42.800 [2024-05-15 04:45:56.917049] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41979 ] 00:08:43.058 [2024-05-15 04:45:57.094966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.317 [2024-05-15 04:45:57.320540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.317 [2024-05-15 04:45:57.320543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.576 [2024-05-15 04:45:57.801232] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:43.576 [2024-05-15 04:45:57.801311] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:43.576 [2024-05-15 04:45:57.801353] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0xc31840 00:08:43.835 [2024-05-15 04:45:57.810788] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:43.835 [2024-05-15 04:45:57.810888] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:43.835 [2024-05-15 04:45:57.810921] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:44.093 Starting test contend 00:08:44.094 Worker Delay Wait us Hold us Total us 00:08:44.094 0 3 187754 178957 366712 00:08:44.094 1 5 98556 279914 378470 00:08:44.094 PASS test contend 00:08:44.094 Starting test hold_by_poller 
00:08:44.094 PASS test hold_by_poller 00:08:44.094 Starting test hold_by_message 00:08:44.094 PASS test hold_by_message 00:08:44.094 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:08:44.094 100014 assertions passed 00:08:44.094 0 assertions failed 00:08:44.094 ************************************ 00:08:44.094 END TEST thread_spdk_lock 00:08:44.094 ************************************ 00:08:44.094 00:08:44.094 real 0m1.354s 00:08:44.094 user 0m1.602s 00:08:44.094 sys 0m0.143s 00:08:44.094 04:45:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.094 04:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:44.094 ************************************ 00:08:44.094 END TEST thread 00:08:44.094 ************************************ 00:08:44.094 00:08:44.094 real 0m5.236s 00:08:44.094 user 0m4.857s 00:08:44.094 sys 0m0.563s 00:08:44.094 04:45:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.094 04:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:44.094 04:45:58 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:44.094 04:45:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:44.094 04:45:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.094 04:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:44.094 ************************************ 00:08:44.094 START TEST accel 00:08:44.094 ************************************ 00:08:44.094 04:45:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:44.353 * Looking for test storage... 00:08:44.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:44.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.353 04:45:58 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:08:44.353 04:45:58 -- accel/accel.sh@74 -- # get_expected_opcs 00:08:44.353 04:45:58 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:44.353 04:45:58 -- accel/accel.sh@59 -- # spdk_tgt_pid=42071 00:08:44.353 04:45:58 -- accel/accel.sh@60 -- # waitforlisten 42071 00:08:44.353 04:45:58 -- common/autotest_common.sh@819 -- # '[' -z 42071 ']' 00:08:44.353 04:45:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.353 04:45:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:44.353 04:45:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.353 04:45:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:44.353 04:45:58 -- common/autotest_common.sh@10 -- # set +x 00:08:44.353 04:45:58 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:44.353 04:45:58 -- accel/accel.sh@58 -- # build_accel_config 00:08:44.353 04:45:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.353 04:45:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.353 04:45:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.353 04:45:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.353 04:45:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.353 04:45:58 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.353 04:45:58 -- accel/accel.sh@42 -- # jq -r . 00:08:44.353 [2024-05-15 04:45:58.556530] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:44.353 [2024-05-15 04:45:58.556951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42071 ] 00:08:44.611 [2024-05-15 04:45:58.714336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.870 [2024-05-15 04:45:58.947247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.870 [2024-05-15 04:45:58.947434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.807 04:45:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.807 04:45:59 -- common/autotest_common.sh@852 -- # return 0 00:08:45.807 04:45:59 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:45.807 04:45:59 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:45.807 04:45:59 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:08:45.807 04:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.807 04:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.807 04:46:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.065 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.065 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.065 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.066 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.066 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.066 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.066 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.066 04:46:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:46.066 04:46:00 -- accel/accel.sh@64 -- # IFS== 00:08:46.066 04:46:00 -- accel/accel.sh@64 -- # read -r opc module 00:08:46.066 04:46:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:46.066 04:46:00 -- accel/accel.sh@67 -- # killprocess 42071 00:08:46.066 04:46:00 -- common/autotest_common.sh@926 -- # '[' -z 42071 ']' 00:08:46.066 04:46:00 -- common/autotest_common.sh@930 -- # kill -0 42071 00:08:46.066 04:46:00 -- common/autotest_common.sh@931 -- # uname 00:08:46.066 04:46:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:46.066 04:46:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 42071 00:08:46.066 killing process with pid 42071 00:08:46.066 04:46:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:46.066 04:46:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:46.066 04:46:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 42071' 00:08:46.066 04:46:00 -- common/autotest_common.sh@945 -- # kill 42071 00:08:46.066 04:46:00 -- common/autotest_common.sh@950 -- # wait 42071 00:08:48.597 04:46:02 -- accel/accel.sh@68 -- # trap - ERR 00:08:48.597 04:46:02 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:08:48.597 04:46:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:48.597 04:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.597 04:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.597 04:46:02 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:08:48.597 04:46:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:48.597 04:46:02 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.597 04:46:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:48.597 04:46:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.597 04:46:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.597 04:46:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:48.597 04:46:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:08:48.597 04:46:02 -- accel/accel.sh@41 -- # local IFS=, 00:08:48.597 04:46:02 -- accel/accel.sh@42 -- # jq -r . 00:08:48.856 04:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.856 04:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.856 04:46:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:48.856 04:46:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:48.856 04:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.856 04:46:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.856 ************************************ 00:08:48.856 START TEST accel_missing_filename 00:08:48.856 ************************************ 00:08:48.856 04:46:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:08:48.856 04:46:02 -- common/autotest_common.sh@640 -- # local es=0 00:08:48.856 04:46:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:48.856 04:46:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:48.856 04:46:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:48.856 04:46:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:48.856 04:46:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:48.856 04:46:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:08:48.856 04:46:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:48.856 04:46:02 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.856 04:46:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:48.856 04:46:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.856 04:46:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.856 04:46:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:48.856 04:46:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:48.856 04:46:02 -- accel/accel.sh@41 -- # local IFS=, 00:08:48.856 04:46:02 -- accel/accel.sh@42 -- # jq -r . 00:08:48.856 [2024-05-15 04:46:03.037432] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:48.856 [2024-05-15 04:46:03.037598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42167 ] 00:08:49.113 [2024-05-15 04:46:03.191526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.370 [2024-05-15 04:46:03.418316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.628 [2024-05-15 04:46:03.660759] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.194 [2024-05-15 04:46:04.204198] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:50.453 A filename is required. 
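(This failure is the point of the test: for -w compress, accel_perf takes its uncompressed input via -l, and accel_missing_filename deliberately omits it, so the app aborts during startup with the message above. A working invocation, sketched from the same binary and the bib sample file that the next test uses, would be:

    # one-second software compress run over the sample input file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib

The follow-up test below supplies -l but adds -y, and is expected to abort instead, because compression output has no stable expected buffer to verify against.)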
00:08:50.453 04:46:04 -- common/autotest_common.sh@643 -- # es=234 00:08:50.453 04:46:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:50.453 04:46:04 -- common/autotest_common.sh@652 -- # es=106 00:08:50.453 ************************************ 00:08:50.453 END TEST accel_missing_filename 00:08:50.453 ************************************ 00:08:50.453 04:46:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:50.453 04:46:04 -- common/autotest_common.sh@660 -- # es=1 00:08:50.453 04:46:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:50.453 00:08:50.453 real 0m1.737s 00:08:50.453 user 0m1.335s 00:08:50.453 sys 0m0.254s 00:08:50.453 04:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.453 04:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 04:46:04 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.453 04:46:04 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:50.453 04:46:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.453 04:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:50.711 ************************************ 00:08:50.711 START TEST accel_compress_verify 00:08:50.711 ************************************ 00:08:50.711 04:46:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.711 04:46:04 -- common/autotest_common.sh@640 -- # local es=0 00:08:50.711 04:46:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.711 04:46:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:50.711 04:46:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:50.711 04:46:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:50.711 04:46:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:50.711 04:46:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.711 04:46:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.711 04:46:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.711 04:46:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.711 04:46:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.711 04:46:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.711 04:46:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.711 04:46:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.711 04:46:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.711 04:46:04 -- accel/accel.sh@42 -- # jq -r . 00:08:50.711 [2024-05-15 04:46:04.829501] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:50.711 [2024-05-15 04:46:04.829662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42218 ] 00:08:50.971 [2024-05-15 04:46:04.993191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.229 [2024-05-15 04:46:05.237683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.487 [2024-05-15 04:46:05.491428] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.053 [2024-05-15 04:46:06.042335] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:52.312 00:08:52.312 Compression does not support the verify option, aborting. 00:08:52.312 ************************************ 00:08:52.312 END TEST accel_compress_verify 00:08:52.312 ************************************ 00:08:52.312 04:46:06 -- common/autotest_common.sh@643 -- # es=161 00:08:52.312 04:46:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:52.312 04:46:06 -- common/autotest_common.sh@652 -- # es=33 00:08:52.312 04:46:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:52.312 04:46:06 -- common/autotest_common.sh@660 -- # es=1 00:08:52.312 04:46:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:52.312 00:08:52.312 real 0m1.799s 00:08:52.312 user 0m1.416s 00:08:52.312 sys 0m0.241s 00:08:52.312 04:46:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.312 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.312 04:46:06 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:52.312 04:46:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:52.312 04:46:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.312 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.312 ************************************ 00:08:52.312 START TEST accel_wrong_workload 00:08:52.312 ************************************ 00:08:52.312 04:46:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:08:52.312 04:46:06 -- common/autotest_common.sh@640 -- # local es=0 00:08:52.312 04:46:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:52.312 04:46:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:52.312 04:46:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:52.312 04:46:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:52.312 04:46:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:52.312 04:46:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:08:52.312 04:46:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:52.312 04:46:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:52.312 04:46:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.312 04:46:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.312 04:46:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.312 04:46:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.312 04:46:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.312 04:46:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.312 04:46:06 -- accel/accel.sh@42 -- # jq -r . 
00:08:52.570 Unsupported workload type: foobar 00:08:52.570 [2024-05-15 04:46:06.679836] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:52.570 accel_perf options: 00:08:52.570 [-h help message] 00:08:52.570 [-q queue depth per core] 00:08:52.570 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:52.570 [-T number of threads per core 00:08:52.570 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:52.571 [-t time in seconds] 00:08:52.571 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:52.571 [ dif_verify, , dif_generate, dif_generate_copy 00:08:52.571 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:52.571 [-l for compress/decompress workloads, name of uncompressed input file 00:08:52.571 [-S for crc32c workload, use this seed value (default 0) 00:08:52.571 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:52.571 [-f for fill workload, use this BYTE value (default 255) 00:08:52.571 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:52.571 [-y verify result if this switch is on] 00:08:52.571 [-a tasks to allocate per core (default: same value as -q)] 00:08:52.571 Can be used to spread operations across a wider range of memory. 00:08:52.571 ************************************ 00:08:52.571 END TEST accel_wrong_workload 00:08:52.571 ************************************ 00:08:52.571 04:46:06 -- common/autotest_common.sh@643 -- # es=1 00:08:52.571 04:46:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:52.571 04:46:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:52.571 04:46:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:52.571 00:08:52.571 real 0m0.168s 00:08:52.571 user 0m0.087s 00:08:52.571 sys 0m0.043s 00:08:52.571 04:46:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.571 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.571 04:46:06 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:52.571 04:46:06 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:52.571 04:46:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.571 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.571 ************************************ 00:08:52.571 START TEST accel_negative_buffers 00:08:52.571 ************************************ 00:08:52.571 04:46:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:52.571 04:46:06 -- common/autotest_common.sh@640 -- # local es=0 00:08:52.571 04:46:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:52.571 04:46:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:52.571 04:46:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:52.571 04:46:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:52.571 04:46:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:52.571 04:46:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:08:52.571 04:46:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:52.571 04:46:06 -- accel/accel.sh@12 -- # 
build_accel_config 00:08:52.571 04:46:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.571 04:46:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.571 04:46:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.571 04:46:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.571 04:46:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.571 04:46:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.571 04:46:06 -- accel/accel.sh@42 -- # jq -r . 00:08:52.829 -x option must be non-negative. 00:08:52.829 [2024-05-15 04:46:06.894664] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:52.829 accel_perf options: 00:08:52.829 [-h help message] 00:08:52.829 [-q queue depth per core] 00:08:52.829 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:52.829 [-T number of threads per core 00:08:52.829 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:52.829 [-t time in seconds] 00:08:52.829 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:52.829 [ dif_verify, , dif_generate, dif_generate_copy 00:08:52.829 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:52.829 [-l for compress/decompress workloads, name of uncompressed input file 00:08:52.829 [-S for crc32c workload, use this seed value (default 0) 00:08:52.829 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:52.829 [-f for fill workload, use this BYTE value (default 255) 00:08:52.829 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:52.829 [-y verify result if this switch is on] 00:08:52.829 [-a tasks to allocate per core (default: same value as -q)] 00:08:52.829 Can be used to spread operations across a wider range of memory. 
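(Both negative tests stop in argument parsing, before any accel module runs: -w foobar is not in the workload list printed above, and -x -1 fails the non-negative check on the xor source-buffer count. Per the same usage text, xor needs at least two source buffers, so a valid run would look like this minimal sketch:

    # xor across two source buffers (the documented minimum), verifying results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2

All options here come from the usage listing above; only the binary path is taken from this job's layout.)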
00:08:52.829 04:46:06 -- common/autotest_common.sh@643 -- # es=1 00:08:52.829 04:46:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:52.829 04:46:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:52.829 04:46:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:52.829 00:08:52.829 real 0m0.161s 00:08:52.829 user 0m0.082s 00:08:52.829 sys 0m0.039s 00:08:52.829 04:46:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.829 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.829 ************************************ 00:08:52.829 END TEST accel_negative_buffers 00:08:52.829 ************************************ 00:08:52.829 04:46:06 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:52.829 04:46:06 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:52.829 04:46:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.829 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.829 ************************************ 00:08:52.829 START TEST accel_crc32c 00:08:52.829 ************************************ 00:08:52.829 04:46:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:52.829 04:46:06 -- accel/accel.sh@16 -- # local accel_opc 00:08:52.829 04:46:06 -- accel/accel.sh@17 -- # local accel_module 00:08:52.829 04:46:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:52.829 04:46:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:52.829 04:46:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:52.829 04:46:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.829 04:46:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.829 04:46:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.829 04:46:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.829 04:46:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.829 04:46:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.829 04:46:06 -- accel/accel.sh@42 -- # jq -r . 00:08:53.087 [2024-05-15 04:46:07.107371] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:53.087 [2024-05-15 04:46:07.107523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42318 ] 00:08:53.087 [2024-05-15 04:46:07.261597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.345 [2024-05-15 04:46:07.502024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.878 04:46:09 -- accel/accel.sh@18 -- # out=' 00:08:55.878 SPDK Configuration: 00:08:55.878 Core mask: 0x1 00:08:55.878 00:08:55.878 Accel Perf Configuration: 00:08:55.878 Workload Type: crc32c 00:08:55.878 CRC-32C seed: 32 00:08:55.878 Transfer size: 4096 bytes 00:08:55.878 Vector count 1 00:08:55.878 Module: software 00:08:55.878 Queue depth: 32 00:08:55.878 Allocate depth: 32 00:08:55.878 # threads/core: 1 00:08:55.878 Run time: 1 seconds 00:08:55.879 Verify: Yes 00:08:55.879 00:08:55.879 Running for 1 seconds... 
00:08:55.879 00:08:55.879 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:55.879 ------------------------------------------------------------------------------------ 00:08:55.879 0,0 120736/s 471 MiB/s 0 0 00:08:55.879 ==================================================================================== 00:08:55.879 Total 120736/s 471 MiB/s 0 0' 00:08:55.879 04:46:09 -- accel/accel.sh@20 -- # IFS=: 00:08:55.879 04:46:09 -- accel/accel.sh@20 -- # read -r var val 00:08:55.879 04:46:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:55.879 04:46:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:55.879 04:46:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:55.879 04:46:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:55.879 04:46:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.879 04:46:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.879 04:46:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:55.879 04:46:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:55.879 04:46:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:55.879 04:46:09 -- accel/accel.sh@42 -- # jq -r . 00:08:55.879 [2024-05-15 04:46:09.913156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:55.879 [2024-05-15 04:46:09.913323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42359 ] 00:08:55.879 [2024-05-15 04:46:10.082676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.137 [2024-05-15 04:46:10.327960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=0x1 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=crc32c 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=32 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=software 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@23 -- # accel_module=software 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=32 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=32 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=1 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val=Yes 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:56.397 04:46:10 -- accel/accel.sh@21 -- # val= 00:08:56.397 04:46:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # IFS=: 00:08:56.397 04:46:10 -- accel/accel.sh@20 -- # read -r var val 00:08:58.929 04:46:12 -- accel/accel.sh@21 -- # val= 00:08:58.929 04:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # IFS=: 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # read -r var val 00:08:58.929 04:46:12 -- accel/accel.sh@21 -- # val= 00:08:58.929 04:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # IFS=: 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # read -r var val 00:08:58.929 04:46:12 -- accel/accel.sh@21 -- # val= 00:08:58.929 04:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # IFS=: 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # read -r var val 00:08:58.929 04:46:12 -- accel/accel.sh@21 -- # val= 00:08:58.929 04:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # IFS=: 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # read -r var val 00:08:58.929 04:46:12 -- accel/accel.sh@21 -- # val= 00:08:58.929 04:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # IFS=: 00:08:58.929 04:46:12 -- 
accel/accel.sh@20 -- # read -r var val 00:08:58.929 04:46:12 -- accel/accel.sh@21 -- # val= 00:08:58.929 04:46:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # IFS=: 00:08:58.929 04:46:12 -- accel/accel.sh@20 -- # read -r var val 00:08:58.929 ************************************ 00:08:58.929 END TEST accel_crc32c 00:08:58.929 ************************************ 00:08:58.929 04:46:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:58.929 04:46:12 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:58.929 04:46:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.929 00:08:58.929 real 0m5.621s 00:08:58.929 user 0m4.807s 00:08:58.929 sys 0m0.526s 00:08:58.929 04:46:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.929 04:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.929 04:46:12 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:58.929 04:46:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:58.929 04:46:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.929 04:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:58.929 ************************************ 00:08:58.929 START TEST accel_crc32c_C2 00:08:58.929 ************************************ 00:08:58.929 04:46:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:58.929 04:46:12 -- accel/accel.sh@16 -- # local accel_opc 00:08:58.929 04:46:12 -- accel/accel.sh@17 -- # local accel_module 00:08:58.929 04:46:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:58.929 04:46:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:58.929 04:46:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:58.929 04:46:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:58.929 04:46:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.929 04:46:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.929 04:46:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:58.929 04:46:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:58.929 04:46:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:58.929 04:46:12 -- accel/accel.sh@42 -- # jq -r . 00:08:58.929 [2024-05-15 04:46:12.776387] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:58.929 [2024-05-15 04:46:12.776545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42418 ] 00:08:58.929 [2024-05-15 04:46:12.933238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.187 [2024-05-15 04:46:13.175645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.720 04:46:15 -- accel/accel.sh@18 -- # out=' 00:09:01.720 SPDK Configuration: 00:09:01.720 Core mask: 0x1 00:09:01.720 00:09:01.720 Accel Perf Configuration: 00:09:01.720 Workload Type: crc32c 00:09:01.720 CRC-32C seed: 0 00:09:01.720 Transfer size: 4096 bytes 00:09:01.720 Vector count 2 00:09:01.720 Module: software 00:09:01.720 Queue depth: 32 00:09:01.720 Allocate depth: 32 00:09:01.720 # threads/core: 1 00:09:01.720 Run time: 1 seconds 00:09:01.720 Verify: Yes 00:09:01.720 00:09:01.720 Running for 1 seconds... 
00:09:01.720 00:09:01.720 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:01.720 ------------------------------------------------------------------------------------ 00:09:01.720 0,0 63008/s 492 MiB/s 0 0 00:09:01.720 ==================================================================================== 00:09:01.720 Total 63008/s 492 MiB/s 0 0' 00:09:01.720 04:46:15 -- accel/accel.sh@20 -- # IFS=: 00:09:01.720 04:46:15 -- accel/accel.sh@20 -- # read -r var val 00:09:01.720 04:46:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:01.720 04:46:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:01.720 04:46:15 -- accel/accel.sh@12 -- # build_accel_config 00:09:01.720 04:46:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:01.720 04:46:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.720 04:46:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.720 04:46:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:01.720 04:46:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:01.720 04:46:15 -- accel/accel.sh@41 -- # local IFS=, 00:09:01.720 04:46:15 -- accel/accel.sh@42 -- # jq -r . 00:09:01.720 [2024-05-15 04:46:15.580162] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:01.720 [2024-05-15 04:46:15.580319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42463 ] 00:09:01.979 [2024-05-15 04:46:15.734691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.238 [2024-05-15 04:46:15.970630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=0x1 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=crc32c 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=0 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 --
accel/accel.sh@21 -- # val='4096 bytes' 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=software 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@23 -- # accel_module=software 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=32 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=32 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=1 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val=Yes 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:02.238 04:46:16 -- accel/accel.sh@21 -- # val= 00:09:02.238 04:46:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # IFS=: 00:09:02.238 04:46:16 -- accel/accel.sh@20 -- # read -r var val 00:09:04.143 04:46:18 -- accel/accel.sh@21 -- # val= 00:09:04.143 04:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # IFS=: 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # read -r var val 00:09:04.143 04:46:18 -- accel/accel.sh@21 -- # val= 00:09:04.143 04:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # IFS=: 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # read -r var val 00:09:04.143 04:46:18 -- accel/accel.sh@21 -- # val= 00:09:04.143 04:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # IFS=: 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # read -r var val 00:09:04.143 04:46:18 -- accel/accel.sh@21 -- # val= 00:09:04.143 04:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # IFS=: 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # read -r var val 00:09:04.143 04:46:18 -- accel/accel.sh@21 -- # val= 00:09:04.143 04:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # IFS=: 00:09:04.143 04:46:18 -- 
accel/accel.sh@20 -- # read -r var val 00:09:04.143 04:46:18 -- accel/accel.sh@21 -- # val= 00:09:04.143 04:46:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # IFS=: 00:09:04.143 04:46:18 -- accel/accel.sh@20 -- # read -r var val 00:09:04.143 ************************************ 00:09:04.143 END TEST accel_crc32c_C2 00:09:04.143 ************************************ 00:09:04.143 04:46:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:04.143 04:46:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:04.143 04:46:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:04.143 00:09:04.143 real 0m5.592s 00:09:04.143 user 0m4.787s 00:09:04.143 sys 0m0.514s 00:09:04.143 04:46:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.143 04:46:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 04:46:18 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:04.143 04:46:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:04.143 04:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.143 04:46:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 ************************************ 00:09:04.143 START TEST accel_copy 00:09:04.143 ************************************ 00:09:04.143 04:46:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:09:04.143 04:46:18 -- accel/accel.sh@16 -- # local accel_opc 00:09:04.143 04:46:18 -- accel/accel.sh@17 -- # local accel_module 00:09:04.143 04:46:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:04.143 04:46:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:04.143 04:46:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:04.143 04:46:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:04.143 04:46:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.143 04:46:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.143 04:46:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:04.143 04:46:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:04.143 04:46:18 -- accel/accel.sh@41 -- # local IFS=, 00:09:04.143 04:46:18 -- accel/accel.sh@42 -- # jq -r . 00:09:04.402 [2024-05-15 04:46:18.430563] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:04.402 [2024-05-15 04:46:18.430869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42516 ] 00:09:04.402 [2024-05-15 04:46:18.609069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.660 [2024-05-15 04:46:18.857410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.186 04:46:21 -- accel/accel.sh@18 -- # out=' 00:09:07.186 SPDK Configuration: 00:09:07.186 Core mask: 0x1 00:09:07.186 00:09:07.186 Accel Perf Configuration: 00:09:07.186 Workload Type: copy 00:09:07.186 Transfer size: 4096 bytes 00:09:07.186 Vector count 1 00:09:07.186 Module: software 00:09:07.186 Queue depth: 32 00:09:07.186 Allocate depth: 32 00:09:07.186 # threads/core: 1 00:09:07.186 Run time: 1 seconds 00:09:07.186 Verify: Yes 00:09:07.186 00:09:07.186 Running for 1 seconds... 
00:09:07.186 00:09:07.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:07.186 ------------------------------------------------------------------------------------ 00:09:07.186 0,0 951776/s 3717 MiB/s 0 0 00:09:07.186 ==================================================================================== 00:09:07.186 Total 951776/s 3717 MiB/s 0 0' 00:09:07.186 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.186 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.186 04:46:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:07.186 04:46:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:07.186 04:46:21 -- accel/accel.sh@12 -- # build_accel_config 00:09:07.186 04:46:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:07.186 04:46:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.186 04:46:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.186 04:46:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:07.186 04:46:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:07.186 04:46:21 -- accel/accel.sh@41 -- # local IFS=, 00:09:07.186 04:46:21 -- accel/accel.sh@42 -- # jq -r . 00:09:07.186 [2024-05-15 04:46:21.237390] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:07.186 [2024-05-15 04:46:21.237548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42562 ] 00:09:07.444 [2024-05-15 04:46:21.434115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.444 [2024-05-15 04:46:21.672892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val= 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val= 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val=0x1 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val= 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val= 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val=copy 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- 
accel/accel.sh@21 -- # val= 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val=software 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@23 -- # accel_module=software 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val=32 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.011 04:46:21 -- accel/accel.sh@21 -- # val=32 00:09:08.011 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.011 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.012 04:46:21 -- accel/accel.sh@21 -- # val=1 00:09:08.012 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.012 04:46:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:08.012 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.012 04:46:21 -- accel/accel.sh@21 -- # val=Yes 00:09:08.012 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.012 04:46:21 -- accel/accel.sh@21 -- # val= 00:09:08.012 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:08.012 04:46:21 -- accel/accel.sh@21 -- # val= 00:09:08.012 04:46:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # IFS=: 00:09:08.012 04:46:21 -- accel/accel.sh@20 -- # read -r var val 00:09:09.911 04:46:23 -- accel/accel.sh@21 -- # val= 00:09:09.911 04:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # IFS=: 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # read -r var val 00:09:09.911 04:46:23 -- accel/accel.sh@21 -- # val= 00:09:09.911 04:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # IFS=: 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # read -r var val 00:09:09.911 04:46:23 -- accel/accel.sh@21 -- # val= 00:09:09.911 04:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # IFS=: 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # read -r var val 00:09:09.911 04:46:23 -- accel/accel.sh@21 -- # val= 00:09:09.911 04:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # IFS=: 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # read -r var val 00:09:09.911 04:46:23 -- accel/accel.sh@21 -- # val= 00:09:09.911 04:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # IFS=: 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # read -r var val 00:09:09.911 04:46:23 -- accel/accel.sh@21 -- # val= 00:09:09.911 04:46:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.911 04:46:23 -- accel/accel.sh@20 -- # IFS=: 00:09:09.911 04:46:23 -- 
accel/accel.sh@20 -- # read -r var val 00:09:09.911 ************************************ 00:09:09.911 END TEST accel_copy 00:09:09.911 ************************************ 00:09:09.911 04:46:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:09.911 04:46:23 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:09.911 04:46:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:09.911 00:09:09.911 real 0m5.624s 00:09:09.911 user 0m4.819s 00:09:09.911 sys 0m0.515s 00:09:09.911 04:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.911 04:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.911 04:46:23 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:09.911 04:46:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:09.911 04:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.911 04:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:09.911 ************************************ 00:09:09.911 START TEST accel_fill 00:09:09.911 ************************************ 00:09:09.911 04:46:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:09.911 04:46:23 -- accel/accel.sh@16 -- # local accel_opc 00:09:09.911 04:46:23 -- accel/accel.sh@17 -- # local accel_module 00:09:09.911 04:46:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:09.911 04:46:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:09.911 04:46:23 -- accel/accel.sh@12 -- # build_accel_config 00:09:09.911 04:46:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:09.911 04:46:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.911 04:46:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.911 04:46:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:09.911 04:46:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:09.911 04:46:23 -- accel/accel.sh@41 -- # local IFS=, 00:09:09.911 04:46:23 -- accel/accel.sh@42 -- # jq -r . 00:09:09.911 [2024-05-15 04:46:24.114914] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:09.911 [2024-05-15 04:46:24.115079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42616 ] 00:09:10.169 [2024-05-15 04:46:24.271026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.427 [2024-05-15 04:46:24.513108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.009 04:46:26 -- accel/accel.sh@18 -- # out=' 00:09:13.009 SPDK Configuration: 00:09:13.009 Core mask: 0x1 00:09:13.009 00:09:13.009 Accel Perf Configuration: 00:09:13.009 Workload Type: fill 00:09:13.009 Fill pattern: 0x80 00:09:13.009 Transfer size: 4096 bytes 00:09:13.009 Vector count 1 00:09:13.009 Module: software 00:09:13.009 Queue depth: 64 00:09:13.009 Allocate depth: 64 00:09:13.009 # threads/core: 1 00:09:13.009 Run time: 1 seconds 00:09:13.009 Verify: Yes 00:09:13.009 00:09:13.009 Running for 1 seconds... 
00:09:13.009 00:09:13.009 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:13.009 ------------------------------------------------------------------------------------ 00:09:13.009 0,0 1443584/s 5639 MiB/s 0 0 00:09:13.009 ==================================================================================== 00:09:13.009 Total 1443584/s 5639 MiB/s 0 0' 00:09:13.009 04:46:26 -- accel/accel.sh@20 -- # IFS=: 00:09:13.009 04:46:26 -- accel/accel.sh@20 -- # read -r var val 00:09:13.009 04:46:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:13.009 04:46:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:13.009 04:46:26 -- accel/accel.sh@12 -- # build_accel_config 00:09:13.009 04:46:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:13.009 04:46:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:13.009 04:46:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.009 04:46:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:13.009 04:46:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:13.009 04:46:26 -- accel/accel.sh@41 -- # local IFS=, 00:09:13.009 04:46:26 -- accel/accel.sh@42 -- # jq -r . 00:09:13.009 [2024-05-15 04:46:26.888242] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:13.009 [2024-05-15 04:46:26.888413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42662 ] 00:09:13.009 [2024-05-15 04:46:27.057832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.268 [2024-05-15 04:46:27.299193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=0x1 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=fill 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@24 -- # accel_opc=fill 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=0x80 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 
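Note on reading these tables: the Bandwidth column in the accel_perf summaries is simply the Transfers column multiplied by the configured transfer size, so each row can be sanity-checked by hand. A quick shell check for the fill pass above:

    # 1443584 transfers/s at 4096 bytes each, converted to MiB/s
    echo $((1443584 * 4096 / 1024 / 1024))   # prints 5639, matching the table

The same pass can also be reproduced outside the harness with the binary shown in the trace; the -c /dev/fd/62 argument is a JSON config fed in by accel.sh and can be dropped, in which case accel_perf falls back to its defaults (the software module, as used here):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y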
00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=software 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@23 -- # accel_module=software 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=64 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=64 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=1 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val=Yes 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:13.526 04:46:27 -- accel/accel.sh@21 -- # val= 00:09:13.526 04:46:27 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # IFS=: 00:09:13.526 04:46:27 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 04:46:29 -- accel/accel.sh@21 -- # val= 00:09:15.425 04:46:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # IFS=: 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 04:46:29 -- accel/accel.sh@21 -- # val= 00:09:15.425 04:46:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # IFS=: 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 04:46:29 -- accel/accel.sh@21 -- # val= 00:09:15.425 04:46:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # IFS=: 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 04:46:29 -- accel/accel.sh@21 -- # val= 00:09:15.425 04:46:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # IFS=: 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 04:46:29 -- accel/accel.sh@21 -- # val= 00:09:15.425 04:46:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # IFS=: 
00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 04:46:29 -- accel/accel.sh@21 -- # val= 00:09:15.425 04:46:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # IFS=: 00:09:15.425 04:46:29 -- accel/accel.sh@20 -- # read -r var val 00:09:15.425 ************************************ 00:09:15.425 END TEST accel_fill 00:09:15.425 ************************************ 00:09:15.425 04:46:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:15.425 04:46:29 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:09:15.425 04:46:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:15.425 00:09:15.425 real 0m5.560s 00:09:15.425 user 0m4.747s 00:09:15.425 sys 0m0.520s 00:09:15.425 04:46:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.425 04:46:29 -- common/autotest_common.sh@10 -- # set +x 00:09:15.425 04:46:29 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:15.425 04:46:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:15.425 04:46:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:15.425 04:46:29 -- common/autotest_common.sh@10 -- # set +x 00:09:15.425 ************************************ 00:09:15.425 START TEST accel_copy_crc32c 00:09:15.425 ************************************ 00:09:15.425 04:46:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:09:15.425 04:46:29 -- accel/accel.sh@16 -- # local accel_opc 00:09:15.426 04:46:29 -- accel/accel.sh@17 -- # local accel_module 00:09:15.426 04:46:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:15.426 04:46:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:15.426 04:46:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:15.426 04:46:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:15.426 04:46:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.426 04:46:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.426 04:46:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:15.426 04:46:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:15.426 04:46:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:15.426 04:46:29 -- accel/accel.sh@42 -- # jq -r . 00:09:15.684 [2024-05-15 04:46:29.731212] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:15.684 [2024-05-15 04:46:29.731446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42728 ] 00:09:15.684 [2024-05-15 04:46:29.914066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.250 [2024-05-15 04:46:30.188665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.780 04:46:32 -- accel/accel.sh@18 -- # out=' 00:09:18.780 SPDK Configuration: 00:09:18.780 Core mask: 0x1 00:09:18.780 00:09:18.780 Accel Perf Configuration: 00:09:18.780 Workload Type: copy_crc32c 00:09:18.780 CRC-32C seed: 0 00:09:18.780 Vector size: 4096 bytes 00:09:18.780 Transfer size: 4096 bytes 00:09:18.780 Vector count 1 00:09:18.780 Module: software 00:09:18.780 Queue depth: 32 00:09:18.780 Allocate depth: 32 00:09:18.780 # threads/core: 1 00:09:18.780 Run time: 1 seconds 00:09:18.780 Verify: Yes 00:09:18.780 00:09:18.781 Running for 1 seconds... 
00:09:18.781 00:09:18.781 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:18.781 ------------------------------------------------------------------------------------ 00:09:18.781 0,0 113792/s 444 MiB/s 0 0 00:09:18.781 ==================================================================================== 00:09:18.781 Total 113792/s 444 MiB/s 0 0' 00:09:18.781 04:46:32 -- accel/accel.sh@20 -- # IFS=: 00:09:18.781 04:46:32 -- accel/accel.sh@20 -- # read -r var val 00:09:18.781 04:46:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:18.781 04:46:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:18.781 04:46:32 -- accel/accel.sh@12 -- # build_accel_config 00:09:18.781 04:46:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:18.781 04:46:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.781 04:46:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.781 04:46:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:18.781 04:46:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:18.781 04:46:32 -- accel/accel.sh@41 -- # local IFS=, 00:09:18.781 04:46:32 -- accel/accel.sh@42 -- # jq -r . 00:09:18.781 [2024-05-15 04:46:32.581313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:18.781 [2024-05-15 04:46:32.581486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42759 ] 00:09:18.781 [2024-05-15 04:46:32.761615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.039 [2024-05-15 04:46:33.019661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=0x1 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=0 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 
04:46:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=software 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@23 -- # accel_module=software 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=32 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=32 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.297 04:46:33 -- accel/accel.sh@21 -- # val=1 00:09:19.297 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.297 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.298 04:46:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:19.298 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.298 04:46:33 -- accel/accel.sh@21 -- # val=Yes 00:09:19.298 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.298 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.298 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:19.298 04:46:33 -- accel/accel.sh@21 -- # val= 00:09:19.298 04:46:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # IFS=: 00:09:19.298 04:46:33 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@21 -- # val= 00:09:21.197 04:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # IFS=: 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@21 -- # val= 00:09:21.197 04:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # IFS=: 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@21 -- # val= 00:09:21.197 04:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # IFS=: 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@21 -- # val= 00:09:21.197 04:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # IFS=: 
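The dense IFS=: / read -r var val / case "$var" triplets that dominate these traces are bash xtrace output from accel.sh consuming the configuration summary that accel_perf echoes back; each val=... line is one field of that summary being read. A minimal sketch of the shape of that loop (simplified for illustration, not the verbatim script):

    while IFS=: read -r var val; do
        case "$var" in
            *Module*) accel_module=${val# } ;;          # e.g. software
            *'Workload Type'*) accel_opc=${val# } ;;    # e.g. copy_crc32c
        esac
    done <<< "$out"

Worth noting alongside: the copy_crc32c rate above (113792/s, 444 MiB/s) sits well below plain copy's 3717 MiB/s, reflecting the cost of computing CRC-32C in the software module on top of the data movement.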
00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@21 -- # val= 00:09:21.197 04:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # IFS=: 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@21 -- # val= 00:09:21.197 04:46:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # IFS=: 00:09:21.197 04:46:35 -- accel/accel.sh@20 -- # read -r var val 00:09:21.197 04:46:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:21.197 04:46:35 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:21.197 04:46:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:21.197 00:09:21.197 real 0m5.677s 00:09:21.197 user 0m4.835s 00:09:21.197 sys 0m0.540s 00:09:21.197 04:46:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.197 ************************************ 00:09:21.197 END TEST accel_copy_crc32c 00:09:21.197 ************************************ 00:09:21.197 04:46:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.197 04:46:35 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:21.197 04:46:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:21.197 04:46:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.197 04:46:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.197 ************************************ 00:09:21.197 START TEST accel_copy_crc32c_C2 00:09:21.197 ************************************ 00:09:21.197 04:46:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:21.197 04:46:35 -- accel/accel.sh@16 -- # local accel_opc 00:09:21.197 04:46:35 -- accel/accel.sh@17 -- # local accel_module 00:09:21.197 04:46:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:21.197 04:46:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:21.197 04:46:35 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.197 04:46:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.197 04:46:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.197 04:46:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.197 04:46:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.197 04:46:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.197 04:46:35 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.197 04:46:35 -- accel/accel.sh@42 -- # jq -r . 00:09:21.455 [2024-05-15 04:46:35.460193] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:21.455 [2024-05-15 04:46:35.460357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42818 ] 00:09:21.713 [2024-05-15 04:46:35.617427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.713 [2024-05-15 04:46:35.865941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.245 04:46:38 -- accel/accel.sh@18 -- # out=' 00:09:24.245 SPDK Configuration: 00:09:24.245 Core mask: 0x1 00:09:24.245 00:09:24.245 Accel Perf Configuration: 00:09:24.245 Workload Type: copy_crc32c 00:09:24.245 CRC-32C seed: 0 00:09:24.245 Vector size: 4096 bytes 00:09:24.245 Transfer size: 8192 bytes 00:09:24.245 Vector count 2 00:09:24.245 Module: software 00:09:24.245 Queue depth: 32 00:09:24.245 Allocate depth: 32 00:09:24.245 # threads/core: 1 00:09:24.245 Run time: 1 seconds 00:09:24.245 Verify: Yes 00:09:24.245 00:09:24.245 Running for 1 seconds... 00:09:24.245 00:09:24.245 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:24.245 ------------------------------------------------------------------------------------ 00:09:24.245 0,0 58400/s 456 MiB/s 0 0 00:09:24.245 ==================================================================================== 00:09:24.245 Total 58400/s 456 MiB/s 0 0' 00:09:24.245 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.245 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.245 04:46:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:24.245 04:46:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:24.245 04:46:38 -- accel/accel.sh@12 -- # build_accel_config 00:09:24.245 04:46:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:24.245 04:46:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.245 04:46:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.245 04:46:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:24.245 04:46:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:24.245 04:46:38 -- accel/accel.sh@41 -- # local IFS=, 00:09:24.245 04:46:38 -- accel/accel.sh@42 -- # jq -r . 00:09:24.245 [2024-05-15 04:46:38.259935] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:24.245 [2024-05-15 04:46:38.260100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42862 ] 00:09:24.245 [2024-05-15 04:46:38.429080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.503 [2024-05-15 04:46:38.681866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.761 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.761 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.761 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.761 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.761 04:46:38 -- accel/accel.sh@21 -- # val=0x1 00:09:24.761 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.761 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.761 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.761 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=0 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val='8192 bytes' 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=software 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@23 -- # accel_module=software 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=32 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=32 
00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=1 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val=Yes 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:24.762 04:46:38 -- accel/accel.sh@21 -- # val= 00:09:24.762 04:46:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # IFS=: 00:09:24.762 04:46:38 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 04:46:40 -- accel/accel.sh@21 -- # val= 00:09:27.296 04:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # IFS=: 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 04:46:40 -- accel/accel.sh@21 -- # val= 00:09:27.296 04:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # IFS=: 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 04:46:40 -- accel/accel.sh@21 -- # val= 00:09:27.296 04:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # IFS=: 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 04:46:40 -- accel/accel.sh@21 -- # val= 00:09:27.296 04:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # IFS=: 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 04:46:40 -- accel/accel.sh@21 -- # val= 00:09:27.296 04:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # IFS=: 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 04:46:40 -- accel/accel.sh@21 -- # val= 00:09:27.296 04:46:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # IFS=: 00:09:27.296 04:46:40 -- accel/accel.sh@20 -- # read -r var val 00:09:27.296 ************************************ 00:09:27.296 END TEST accel_copy_crc32c_C2 00:09:27.296 ************************************ 00:09:27.296 04:46:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:27.296 04:46:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:27.296 04:46:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:27.296 00:09:27.296 real 0m5.673s 00:09:27.296 user 0m4.868s 00:09:27.296 sys 0m0.516s 00:09:27.296 04:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.296 04:46:40 -- common/autotest_common.sh@10 -- # set +x 00:09:27.296 04:46:41 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:27.296 04:46:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
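With -C 2, each copy_crc32c operation chains two 4096-byte source vectors, which is why the summary for the pass just concluded reports an 8192-byte transfer size; the bandwidth figure checks out against that size:

    echo $((58400 * 8192 / 1024 / 1024))   # prints 456 (MiB/s), as reported

The operation rate is roughly half that of the single-vector run (58400/s against 113792/s) because each operation now covers twice the data while producing one CRC-32C spanning both vectors.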
00:09:27.296 04:46:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.296 04:46:41 -- common/autotest_common.sh@10 -- # set +x 00:09:27.296 ************************************ 00:09:27.296 START TEST accel_dualcast 00:09:27.296 ************************************ 00:09:27.296 04:46:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:09:27.296 04:46:41 -- accel/accel.sh@16 -- # local accel_opc 00:09:27.296 04:46:41 -- accel/accel.sh@17 -- # local accel_module 00:09:27.296 04:46:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:09:27.296 04:46:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:27.296 04:46:41 -- accel/accel.sh@12 -- # build_accel_config 00:09:27.296 04:46:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:27.296 04:46:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:27.296 04:46:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:27.296 04:46:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:27.296 04:46:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:27.296 04:46:41 -- accel/accel.sh@41 -- # local IFS=, 00:09:27.296 04:46:41 -- accel/accel.sh@42 -- # jq -r . 00:09:27.296 [2024-05-15 04:46:41.188250] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:27.296 [2024-05-15 04:46:41.188421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42921 ] 00:09:27.296 [2024-05-15 04:46:41.356397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.555 [2024-05-15 04:46:41.628878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.090 04:46:43 -- accel/accel.sh@18 -- # out=' 00:09:30.090 SPDK Configuration: 00:09:30.090 Core mask: 0x1 00:09:30.090 00:09:30.090 Accel Perf Configuration: 00:09:30.090 Workload Type: dualcast 00:09:30.090 Transfer size: 4096 bytes 00:09:30.090 Vector count 1 00:09:30.090 Module: software 00:09:30.090 Queue depth: 32 00:09:30.090 Allocate depth: 32 00:09:30.090 # threads/core: 1 00:09:30.090 Run time: 1 seconds 00:09:30.090 Verify: Yes 00:09:30.090 00:09:30.090 Running for 1 seconds... 00:09:30.090 00:09:30.090 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:30.090 ------------------------------------------------------------------------------------ 00:09:30.090 0,0 719552/s 2810 MiB/s 0 0 00:09:30.090 ==================================================================================== 00:09:30.090 Total 719552/s 2810 MiB/s 0 0' 00:09:30.090 04:46:43 -- accel/accel.sh@20 -- # IFS=: 00:09:30.090 04:46:43 -- accel/accel.sh@20 -- # read -r var val 00:09:30.090 04:46:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:30.090 04:46:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:30.090 04:46:43 -- accel/accel.sh@12 -- # build_accel_config 00:09:30.090 04:46:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:30.090 04:46:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:30.090 04:46:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:30.090 04:46:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:30.090 04:46:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:30.090 04:46:43 -- accel/accel.sh@41 -- # local IFS=, 00:09:30.090 04:46:43 -- accel/accel.sh@42 -- # jq -r . 
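dualcast writes one 4096-byte source to two destination buffers per operation. Judging by the figures above, the Bandwidth column counts the transfer size once per operation rather than both destination writes: 719552 transfers/s at 4096 bytes is exactly the reported 2810 MiB/s. The traced command, minus the harness-supplied -c /dev/fd/62 config, can be run directly:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y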
00:09:30.090 [2024-05-15 04:46:44.126845] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:30.090 [2024-05-15 04:46:44.127014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42967 ] 00:09:30.090 [2024-05-15 04:46:44.282306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.348 [2024-05-15 04:46:44.550751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.607 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.607 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.607 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.607 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.607 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.607 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.607 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.607 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.607 04:46:44 -- accel/accel.sh@21 -- # val=0x1 00:09:30.607 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.607 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.607 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.607 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val=dualcast 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val=software 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@23 -- # accel_module=software 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val=32 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val=32 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val=1 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 
04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val=Yes 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:30.866 04:46:44 -- accel/accel.sh@21 -- # val= 00:09:30.866 04:46:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # IFS=: 00:09:30.866 04:46:44 -- accel/accel.sh@20 -- # read -r var val 00:09:32.803 04:46:46 -- accel/accel.sh@21 -- # val= 00:09:32.803 04:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.803 04:46:46 -- accel/accel.sh@20 -- # IFS=: 00:09:32.803 04:46:46 -- accel/accel.sh@20 -- # read -r var val 00:09:32.803 04:46:46 -- accel/accel.sh@21 -- # val= 00:09:32.804 04:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # IFS=: 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # read -r var val 00:09:32.804 04:46:46 -- accel/accel.sh@21 -- # val= 00:09:32.804 04:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # IFS=: 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # read -r var val 00:09:32.804 04:46:46 -- accel/accel.sh@21 -- # val= 00:09:32.804 04:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # IFS=: 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # read -r var val 00:09:32.804 04:46:46 -- accel/accel.sh@21 -- # val= 00:09:32.804 04:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # IFS=: 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # read -r var val 00:09:32.804 04:46:46 -- accel/accel.sh@21 -- # val= 00:09:32.804 04:46:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # IFS=: 00:09:32.804 04:46:46 -- accel/accel.sh@20 -- # read -r var val 00:09:32.804 ************************************ 00:09:32.804 END TEST accel_dualcast 00:09:32.804 ************************************ 00:09:32.804 04:46:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:32.804 04:46:46 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:09:32.804 04:46:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:32.804 00:09:32.804 real 0m5.823s 00:09:32.804 user 0m4.995s 00:09:32.804 sys 0m0.512s 00:09:32.804 04:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.804 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:09:32.804 04:46:46 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:32.804 04:46:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:32.804 04:46:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.804 04:46:46 -- common/autotest_common.sh@10 -- # set +x 00:09:32.804 ************************************ 00:09:32.804 START TEST accel_compare 00:09:32.804 ************************************ 00:09:32.804 04:46:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:09:32.804 
04:46:46 -- accel/accel.sh@16 -- # local accel_opc 00:09:32.804 04:46:46 -- accel/accel.sh@17 -- # local accel_module 00:09:32.804 04:46:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:09:32.804 04:46:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:32.804 04:46:46 -- accel/accel.sh@12 -- # build_accel_config 00:09:32.804 04:46:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:32.804 04:46:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:32.804 04:46:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:32.804 04:46:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:32.804 04:46:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:32.804 04:46:46 -- accel/accel.sh@41 -- # local IFS=, 00:09:32.804 04:46:46 -- accel/accel.sh@42 -- # jq -r . 00:09:33.078 [2024-05-15 04:46:47.070036] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:33.078 [2024-05-15 04:46:47.070198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43014 ] 00:09:33.078 [2024-05-15 04:46:47.238249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.336 [2024-05-15 04:46:47.507825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.867 04:46:49 -- accel/accel.sh@18 -- # out=' 00:09:35.867 SPDK Configuration: 00:09:35.867 Core mask: 0x1 00:09:35.867 00:09:35.867 Accel Perf Configuration: 00:09:35.867 Workload Type: compare 00:09:35.867 Transfer size: 4096 bytes 00:09:35.867 Vector count 1 00:09:35.867 Module: software 00:09:35.867 Queue depth: 32 00:09:35.867 Allocate depth: 32 00:09:35.867 # threads/core: 1 00:09:35.867 Run time: 1 seconds 00:09:35.867 Verify: Yes 00:09:35.867 00:09:35.867 Running for 1 seconds... 00:09:35.867 00:09:35.868 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:35.868 ------------------------------------------------------------------------------------ 00:09:35.868 0,0 1464448/s 5720 MiB/s 0 0 00:09:35.868 ==================================================================================== 00:09:35.868 Total 1464448/s 5720 MiB/s 0 0' 00:09:35.868 04:46:49 -- accel/accel.sh@20 -- # IFS=: 00:09:35.868 04:46:49 -- accel/accel.sh@20 -- # read -r var val 00:09:35.868 04:46:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:35.868 04:46:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:35.868 04:46:49 -- accel/accel.sh@12 -- # build_accel_config 00:09:35.868 04:46:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:35.868 04:46:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:35.868 04:46:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:35.868 04:46:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:35.868 04:46:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:35.868 04:46:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:35.868 04:46:49 -- accel/accel.sh@42 -- # jq -r . 00:09:35.868 [2024-05-15 04:46:49.975894] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
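compare checks two 4096-byte buffers for equality instead of moving data, which is why it posts the highest rate in this batch (1464448/s, 5720 MiB/s, against 951776/s for plain copy); any mismatch would land in the Miscompares column, which stays at 0 throughout. Run outside the harness, the equivalent invocation would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y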
00:09:35.868 [2024-05-15 04:46:49.976073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43065 ] 00:09:36.127 [2024-05-15 04:46:50.149274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.385 [2024-05-15 04:46:50.400232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=0x1 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=compare 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=software 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@23 -- # accel_module=software 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=32 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=32 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=1 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val='1 seconds' 
00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val=Yes 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:36.645 04:46:50 -- accel/accel.sh@21 -- # val= 00:09:36.645 04:46:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # IFS=: 00:09:36.645 04:46:50 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 04:46:52 -- accel/accel.sh@21 -- # val= 00:09:38.547 04:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # IFS=: 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 04:46:52 -- accel/accel.sh@21 -- # val= 00:09:38.547 04:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # IFS=: 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 04:46:52 -- accel/accel.sh@21 -- # val= 00:09:38.547 04:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # IFS=: 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 04:46:52 -- accel/accel.sh@21 -- # val= 00:09:38.547 04:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # IFS=: 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 04:46:52 -- accel/accel.sh@21 -- # val= 00:09:38.547 04:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # IFS=: 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 04:46:52 -- accel/accel.sh@21 -- # val= 00:09:38.547 04:46:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # IFS=: 00:09:38.547 04:46:52 -- accel/accel.sh@20 -- # read -r var val 00:09:38.547 ************************************ 00:09:38.547 END TEST accel_compare 00:09:38.547 ************************************ 00:09:38.547 04:46:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:38.547 04:46:52 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:09:38.547 04:46:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:38.547 00:09:38.547 real 0m5.764s 00:09:38.547 user 0m4.962s 00:09:38.547 sys 0m0.514s 00:09:38.547 04:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.547 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:09:38.547 04:46:52 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:38.547 04:46:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:38.547 04:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.547 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:09:38.547 ************************************ 00:09:38.547 START TEST accel_xor 00:09:38.547 ************************************ 00:09:38.547 04:46:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:09:38.547 04:46:52 -- accel/accel.sh@16 -- # local accel_opc 00:09:38.547 04:46:52 -- accel/accel.sh@17 -- # local accel_module 00:09:38.547 
04:46:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:09:38.547 04:46:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:38.547 04:46:52 -- accel/accel.sh@12 -- # build_accel_config 00:09:38.547 04:46:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:38.547 04:46:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:38.547 04:46:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:38.547 04:46:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:38.547 04:46:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:38.547 04:46:52 -- accel/accel.sh@41 -- # local IFS=, 00:09:38.547 04:46:52 -- accel/accel.sh@42 -- # jq -r . 00:09:38.806 [2024-05-15 04:46:52.899680] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:38.806 [2024-05-15 04:46:52.899942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43124 ] 00:09:39.072 [2024-05-15 04:46:53.073181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.333 [2024-05-15 04:46:53.335029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.865 04:46:55 -- accel/accel.sh@18 -- # out=' 00:09:41.865 SPDK Configuration: 00:09:41.865 Core mask: 0x1 00:09:41.865 00:09:41.865 Accel Perf Configuration: 00:09:41.865 Workload Type: xor 00:09:41.865 Source buffers: 2 00:09:41.865 Transfer size: 4096 bytes 00:09:41.865 Vector count 1 00:09:41.865 Module: software 00:09:41.865 Queue depth: 32 00:09:41.865 Allocate depth: 32 00:09:41.865 # threads/core: 1 00:09:41.865 Run time: 1 seconds 00:09:41.865 Verify: Yes 00:09:41.865 00:09:41.865 Running for 1 seconds... 00:09:41.865 00:09:41.865 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:41.865 ------------------------------------------------------------------------------------ 00:09:41.865 0,0 41984/s 164 MiB/s 0 0 00:09:41.865 ==================================================================================== 00:09:41.865 Total 41984/s 164 MiB/s 0 0' 00:09:41.865 04:46:55 -- accel/accel.sh@20 -- # IFS=: 00:09:41.865 04:46:55 -- accel/accel.sh@20 -- # read -r var val 00:09:41.865 04:46:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:41.865 04:46:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:41.865 04:46:55 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.865 04:46:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.865 04:46:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.865 04:46:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.865 04:46:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.865 04:46:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.865 04:46:55 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.865 04:46:55 -- accel/accel.sh@42 -- # jq -r . 00:09:41.865 [2024-05-15 04:46:55.743063] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:41.866 [2024-05-15 04:46:55.743222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43165 ] 00:09:41.866 [2024-05-15 04:46:55.897097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.124 [2024-05-15 04:46:56.143877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=0x1 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=xor 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=2 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=software 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@23 -- # accel_module=software 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=32 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=32 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=1 00:09:42.383 04:46:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val=Yes 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.383 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.383 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:42.383 04:46:56 -- accel/accel.sh@21 -- # val= 00:09:42.384 04:46:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.384 04:46:56 -- accel/accel.sh@20 -- # IFS=: 00:09:42.384 04:46:56 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 04:46:58 -- accel/accel.sh@21 -- # val= 00:09:44.287 04:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # IFS=: 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 04:46:58 -- accel/accel.sh@21 -- # val= 00:09:44.287 04:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # IFS=: 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 04:46:58 -- accel/accel.sh@21 -- # val= 00:09:44.287 04:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # IFS=: 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 04:46:58 -- accel/accel.sh@21 -- # val= 00:09:44.287 04:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # IFS=: 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 04:46:58 -- accel/accel.sh@21 -- # val= 00:09:44.287 04:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # IFS=: 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 04:46:58 -- accel/accel.sh@21 -- # val= 00:09:44.287 04:46:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # IFS=: 00:09:44.287 04:46:58 -- accel/accel.sh@20 -- # read -r var val 00:09:44.287 ************************************ 00:09:44.287 END TEST accel_xor 00:09:44.287 ************************************ 00:09:44.287 04:46:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:44.287 04:46:58 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:44.287 04:46:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.287 00:09:44.287 real 0m5.620s 00:09:44.287 user 0m4.827s 00:09:44.287 sys 0m0.502s 00:09:44.287 04:46:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.287 04:46:58 -- common/autotest_common.sh@10 -- # set +x 00:09:44.287 04:46:58 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:44.287 04:46:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:44.287 04:46:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.287 04:46:58 -- common/autotest_common.sh@10 -- # set +x 00:09:44.287 ************************************ 00:09:44.287 START TEST accel_xor 00:09:44.287 ************************************ 00:09:44.287 
04:46:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:09:44.287 04:46:58 -- accel/accel.sh@16 -- # local accel_opc 00:09:44.287 04:46:58 -- accel/accel.sh@17 -- # local accel_module 00:09:44.287 04:46:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:09:44.287 04:46:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:44.287 04:46:58 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.287 04:46:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.287 04:46:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.287 04:46:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.287 04:46:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.287 04:46:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.287 04:46:58 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.287 04:46:58 -- accel/accel.sh@42 -- # jq -r . 00:09:44.546 [2024-05-15 04:46:58.568553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:44.546 [2024-05-15 04:46:58.568871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43226 ] 00:09:44.546 [2024-05-15 04:46:58.736502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.804 [2024-05-15 04:46:58.971677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.339 04:47:01 -- accel/accel.sh@18 -- # out=' 00:09:47.339 SPDK Configuration: 00:09:47.339 Core mask: 0x1 00:09:47.339 00:09:47.339 Accel Perf Configuration: 00:09:47.339 Workload Type: xor 00:09:47.339 Source buffers: 3 00:09:47.339 Transfer size: 4096 bytes 00:09:47.339 Vector count 1 00:09:47.339 Module: software 00:09:47.339 Queue depth: 32 00:09:47.339 Allocate depth: 32 00:09:47.339 # threads/core: 1 00:09:47.339 Run time: 1 seconds 00:09:47.339 Verify: Yes 00:09:47.339 00:09:47.339 Running for 1 seconds... 00:09:47.339 00:09:47.339 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:47.339 ------------------------------------------------------------------------------------ 00:09:47.339 0,0 33504/s 130 MiB/s 0 0 00:09:47.339 ==================================================================================== 00:09:47.339 Total 33504/s 130 MiB/s 0 0' 00:09:47.339 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.339 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.339 04:47:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:47.339 04:47:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:47.339 04:47:01 -- accel/accel.sh@12 -- # build_accel_config 00:09:47.339 04:47:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:47.339 04:47:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:47.339 04:47:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:47.339 04:47:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:47.339 04:47:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:47.339 04:47:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:47.339 04:47:01 -- accel/accel.sh@42 -- # jq -r . 00:09:47.339 [2024-05-15 04:47:01.301659] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:47.339 [2024-05-15 04:47:01.301827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43258 ] 00:09:47.339 [2024-05-15 04:47:01.456193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.598 [2024-05-15 04:47:01.687765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.857 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.857 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.857 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.857 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.857 04:47:01 -- accel/accel.sh@21 -- # val=0x1 00:09:47.857 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.857 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.857 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.857 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.857 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.857 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.857 04:47:01 -- accel/accel.sh@21 -- # val=xor 00:09:47.857 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val=3 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val=software 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@23 -- # accel_module=software 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val=32 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val=32 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val=1 00:09:47.858 04:47:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val=Yes 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:47.858 04:47:01 -- accel/accel.sh@21 -- # val= 00:09:47.858 04:47:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # IFS=: 00:09:47.858 04:47:01 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@21 -- # val= 00:09:49.762 04:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # IFS=: 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@21 -- # val= 00:09:49.762 04:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # IFS=: 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@21 -- # val= 00:09:49.762 04:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # IFS=: 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@21 -- # val= 00:09:49.762 04:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # IFS=: 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@21 -- # val= 00:09:49.762 04:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # IFS=: 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@21 -- # val= 00:09:49.762 04:47:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # IFS=: 00:09:49.762 04:47:03 -- accel/accel.sh@20 -- # read -r var val 00:09:49.762 04:47:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:49.762 04:47:03 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:49.762 04:47:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:49.762 00:09:49.762 real 0m5.458s 00:09:49.762 user 0m4.654s 00:09:49.762 sys 0m0.510s 00:09:49.762 04:47:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.762 04:47:03 -- common/autotest_common.sh@10 -- # set +x 00:09:49.762 ************************************ 00:09:49.762 END TEST accel_xor 00:09:49.762 ************************************ 00:09:49.762 04:47:03 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:49.762 04:47:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:49.762 04:47:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.762 04:47:03 -- common/autotest_common.sh@10 -- # set +x 00:09:49.762 ************************************ 00:09:49.762 START TEST accel_dif_verify 00:09:49.762 ************************************ 
00:09:49.762 04:47:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:09:49.762 04:47:03 -- accel/accel.sh@16 -- # local accel_opc 00:09:49.762 04:47:03 -- accel/accel.sh@17 -- # local accel_module 00:09:49.762 04:47:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:49.762 04:47:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:49.762 04:47:03 -- accel/accel.sh@12 -- # build_accel_config 00:09:49.762 04:47:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.762 04:47:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.762 04:47:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.762 04:47:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.762 04:47:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:49.762 04:47:03 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.762 04:47:03 -- accel/accel.sh@42 -- # jq -r . 00:09:50.020 [2024-05-15 04:47:04.078899] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:50.021 [2024-05-15 04:47:04.079067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43317 ] 00:09:50.279 [2024-05-15 04:47:04.234735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.279 [2024-05-15 04:47:04.466322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.812 04:47:06 -- accel/accel.sh@18 -- # out=' 00:09:52.812 SPDK Configuration: 00:09:52.812 Core mask: 0x1 00:09:52.812 00:09:52.812 Accel Perf Configuration: 00:09:52.812 Workload Type: dif_verify 00:09:52.812 Vector size: 4096 bytes 00:09:52.812 Transfer size: 4096 bytes 00:09:52.812 Block size: 512 bytes 00:09:52.812 Metadata size: 8 bytes 00:09:52.812 Vector count 1 00:09:52.812 Module: software 00:09:52.812 Queue depth: 32 00:09:52.812 Allocate depth: 32 00:09:52.812 # threads/core: 1 00:09:52.812 Run time: 1 seconds 00:09:52.812 Verify: No 00:09:52.812 00:09:52.812 Running for 1 seconds... 00:09:52.813 00:09:52.813 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:52.813 ------------------------------------------------------------------------------------ 00:09:52.813 0,0 58784/s 229 MiB/s 0 0 00:09:52.813 ==================================================================================== 00:09:52.813 Total 58784/s 229 MiB/s 0 0' 00:09:52.813 04:47:06 -- accel/accel.sh@20 -- # IFS=: 00:09:52.813 04:47:06 -- accel/accel.sh@20 -- # read -r var val 00:09:52.813 04:47:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:52.813 04:47:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:52.813 04:47:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.813 04:47:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.813 04:47:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.813 04:47:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.813 04:47:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.813 04:47:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.813 04:47:06 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.813 04:47:06 -- accel/accel.sh@42 -- # jq -r . 00:09:52.813 [2024-05-15 04:47:06.807242] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:52.813 [2024-05-15 04:47:06.807397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43358 ] 00:09:52.813 [2024-05-15 04:47:06.959707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.071 [2024-05-15 04:47:07.191390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val=0x1 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val=dif_verify 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val=software 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@23 -- # accel_module=software 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 
-- # val=32 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val=32 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val=1 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val=No 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:53.329 04:47:07 -- accel/accel.sh@21 -- # val= 00:09:53.329 04:47:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # IFS=: 00:09:53.329 04:47:07 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@21 -- # val= 00:09:55.231 04:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # IFS=: 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@21 -- # val= 00:09:55.231 04:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # IFS=: 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@21 -- # val= 00:09:55.231 04:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # IFS=: 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@21 -- # val= 00:09:55.231 04:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # IFS=: 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@21 -- # val= 00:09:55.231 04:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # IFS=: 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@21 -- # val= 00:09:55.231 04:47:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # IFS=: 00:09:55.231 04:47:09 -- accel/accel.sh@20 -- # read -r var val 00:09:55.231 04:47:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:55.231 04:47:09 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:09:55.231 04:47:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:55.231 00:09:55.231 real 0m5.442s 00:09:55.231 user 0m4.650s 00:09:55.231 sys 0m0.502s 00:09:55.231 04:47:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.231 ************************************ 00:09:55.231 END TEST accel_dif_verify 00:09:55.231 ************************************ 00:09:55.231 
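For reference, each test pass above is a standalone run of the accel_perf example binary, and the Bandwidth column is derived directly from the Transfers column: MiB/s = (transfers/s * 4096-byte transfer size) / 2^20, which is why the Total row for dif_verify reads 229 MiB/s at 58784 transfers/s. A minimal sketch of reproducing the dif_verify pass by hand follows; running without the harness-supplied -c /dev/fd/62 JSON config is an assumption (build_accel_config assembles that config at runtime), while the binary path, -t duration, and -w workload name are taken from the log.

  #!/bin/bash
  # Sketch: re-run the dif_verify workload outside the autotest harness.
  # Assumption: accel_perf falls back to its built-in software module when
  # no -c JSON config is passed (the harness feeds one via /dev/fd/62).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify

  # Cross-check a Bandwidth cell from the table above:
  # 58784 transfers/s * 4096 bytes / 1048576 = 229 MiB/s (integer-truncated).
  echo $(( 58784 * 4096 / 1048576 ))   # prints 229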
04:47:09 -- common/autotest_common.sh@10 -- # set +x 00:09:55.231 04:47:09 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:55.231 04:47:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:55.231 04:47:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.231 04:47:09 -- common/autotest_common.sh@10 -- # set +x 00:09:55.231 ************************************ 00:09:55.231 START TEST accel_dif_generate 00:09:55.231 ************************************ 00:09:55.231 04:47:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:09:55.231 04:47:09 -- accel/accel.sh@16 -- # local accel_opc 00:09:55.231 04:47:09 -- accel/accel.sh@17 -- # local accel_module 00:09:55.231 04:47:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:09:55.231 04:47:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:55.231 04:47:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.231 04:47:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.231 04:47:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.231 04:47:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.231 04:47:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.231 04:47:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.231 04:47:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.231 04:47:09 -- accel/accel.sh@42 -- # jq -r . 00:09:55.489 [2024-05-15 04:47:09.583200] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:55.489 [2024-05-15 04:47:09.583371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43414 ] 00:09:55.747 [2024-05-15 04:47:09.738743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.747 [2024-05-15 04:47:09.966531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.370 04:47:12 -- accel/accel.sh@18 -- # out=' 00:09:58.370 SPDK Configuration: 00:09:58.370 Core mask: 0x1 00:09:58.370 00:09:58.370 Accel Perf Configuration: 00:09:58.370 Workload Type: dif_generate 00:09:58.370 Vector size: 4096 bytes 00:09:58.370 Transfer size: 4096 bytes 00:09:58.370 Block size: 512 bytes 00:09:58.370 Metadata size: 8 bytes 00:09:58.370 Vector count 1 00:09:58.370 Module: software 00:09:58.370 Queue depth: 32 00:09:58.370 Allocate depth: 32 00:09:58.370 # threads/core: 1 00:09:58.370 Run time: 1 seconds 00:09:58.370 Verify: No 00:09:58.370 00:09:58.370 Running for 1 seconds... 
00:09:58.370 00:09:58.370 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:58.370 ------------------------------------------------------------------------------------ 00:09:58.370 0,0 59936/s 234 MiB/s 0 0 00:09:58.370 ==================================================================================== 00:09:58.370 Total 59936/s 234 MiB/s 0 0' 00:09:58.370 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.370 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.370 04:47:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:58.370 04:47:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:58.370 04:47:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.370 04:47:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.370 04:47:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.370 04:47:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.370 04:47:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.370 04:47:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.370 04:47:12 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.370 04:47:12 -- accel/accel.sh@42 -- # jq -r . 00:09:58.370 [2024-05-15 04:47:12.300586] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:58.370 [2024-05-15 04:47:12.300783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43454 ] 00:09:58.638 [2024-05-15 04:47:12.451157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.638 [2024-05-15 04:47:12.691269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=0x1 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=dif_generate 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 
04:47:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=software 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@23 -- # accel_module=software 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=32 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=32 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=1 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val=No 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:09:58.899 04:47:12 -- accel/accel.sh@21 -- # val= 00:09:58.899 04:47:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # IFS=: 00:09:58.899 04:47:12 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 04:47:14 -- accel/accel.sh@21 -- # val= 00:10:00.801 04:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # IFS=: 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 04:47:14 -- accel/accel.sh@21 -- # val= 00:10:00.801 04:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # IFS=: 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 04:47:14 -- accel/accel.sh@21 -- # val= 00:10:00.801 04:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # 
IFS=: 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 04:47:14 -- accel/accel.sh@21 -- # val= 00:10:00.801 04:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # IFS=: 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 04:47:14 -- accel/accel.sh@21 -- # val= 00:10:00.801 04:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # IFS=: 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 04:47:14 -- accel/accel.sh@21 -- # val= 00:10:00.801 04:47:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # IFS=: 00:10:00.801 04:47:14 -- accel/accel.sh@20 -- # read -r var val 00:10:00.801 ************************************ 00:10:00.801 END TEST accel_dif_generate 00:10:00.801 ************************************ 00:10:00.801 04:47:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:00.801 04:47:14 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:00.801 04:47:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:00.801 00:10:00.801 real 0m5.457s 00:10:00.801 user 0m4.681s 00:10:00.801 sys 0m0.484s 00:10:00.801 04:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.801 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:10:00.801 04:47:14 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:00.801 04:47:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:00.801 04:47:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.801 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:10:00.801 ************************************ 00:10:00.801 START TEST accel_dif_generate_copy 00:10:00.801 ************************************ 00:10:00.801 04:47:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:00.801 04:47:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:00.801 04:47:14 -- accel/accel.sh@17 -- # local accel_module 00:10:00.801 04:47:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:00.801 04:47:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:00.801 04:47:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:00.801 04:47:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.801 04:47:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.801 04:47:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.801 04:47:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.801 04:47:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.801 04:47:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.801 04:47:14 -- accel/accel.sh@42 -- # jq -r . 00:10:01.060 [2024-05-15 04:47:15.091127] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:01.060 [2024-05-15 04:47:15.091300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43513 ] 00:10:01.318 [2024-05-15 04:47:15.253420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.318 [2024-05-15 04:47:15.495084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.851 04:47:17 -- accel/accel.sh@18 -- # out=' 00:10:03.851 SPDK Configuration: 00:10:03.851 Core mask: 0x1 00:10:03.851 00:10:03.851 Accel Perf Configuration: 00:10:03.851 Workload Type: dif_generate_copy 00:10:03.851 Vector size: 4096 bytes 00:10:03.851 Transfer size: 4096 bytes 00:10:03.851 Vector count 1 00:10:03.851 Module: software 00:10:03.851 Queue depth: 32 00:10:03.851 Allocate depth: 32 00:10:03.851 # threads/core: 1 00:10:03.851 Run time: 1 seconds 00:10:03.851 Verify: No 00:10:03.851 00:10:03.851 Running for 1 seconds... 00:10:03.851 00:10:03.851 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:03.851 ------------------------------------------------------------------------------------ 00:10:03.851 0,0 53472/s 208 MiB/s 0 0 00:10:03.851 ==================================================================================== 00:10:03.851 Total 53472/s 208 MiB/s 0 0' 00:10:03.851 04:47:17 -- accel/accel.sh@20 -- # IFS=: 00:10:03.851 04:47:17 -- accel/accel.sh@20 -- # read -r var val 00:10:03.851 04:47:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:03.851 04:47:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:03.851 04:47:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.851 04:47:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.851 04:47:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.851 04:47:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.851 04:47:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.851 04:47:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.851 04:47:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.851 04:47:17 -- accel/accel.sh@42 -- # jq -r . 00:10:03.851 [2024-05-15 04:47:17.865598] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:03.851 [2024-05-15 04:47:17.866350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43554 ] 00:10:03.851 [2024-05-15 04:47:18.050084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.109 [2024-05-15 04:47:18.295305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val=0x1 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val=software 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@23 -- # accel_module=software 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val=32 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val=32 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 
-- # val=1 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val=No 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:04.368 04:47:18 -- accel/accel.sh@21 -- # val= 00:10:04.368 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:10:04.368 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 04:47:20 -- accel/accel.sh@21 -- # val= 00:10:06.899 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 04:47:20 -- accel/accel.sh@21 -- # val= 00:10:06.899 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 04:47:20 -- accel/accel.sh@21 -- # val= 00:10:06.899 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 04:47:20 -- accel/accel.sh@21 -- # val= 00:10:06.899 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 04:47:20 -- accel/accel.sh@21 -- # val= 00:10:06.899 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 04:47:20 -- accel/accel.sh@21 -- # val= 00:10:06.899 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:10:06.899 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:10:06.899 ************************************ 00:10:06.899 END TEST accel_dif_generate_copy 00:10:06.899 ************************************ 00:10:06.899 04:47:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:06.899 04:47:20 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:06.899 04:47:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.899 00:10:06.899 real 0m5.575s 00:10:06.899 user 0m4.765s 00:10:06.899 sys 0m0.512s 00:10:06.899 04:47:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.899 04:47:20 -- common/autotest_common.sh@10 -- # set +x 00:10:06.899 04:47:20 -- accel/accel.sh@107 -- # [[ n == y ]] 00:10:06.899 04:47:20 -- accel/accel.sh@116 -- # [[ n == y ]] 00:10:06.899 04:47:20 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:06.899 04:47:20 -- accel/accel.sh@129 -- # build_accel_config 00:10:06.899 04:47:20 -- common/autotest_common.sh@1077 -- # '[' 
4 -le 1 ']' 00:10:06.899 04:47:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.899 04:47:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.899 04:47:20 -- common/autotest_common.sh@10 -- # set +x 00:10:06.899 04:47:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.899 04:47:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.899 04:47:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.899 04:47:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.899 04:47:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.899 04:47:20 -- accel/accel.sh@42 -- # jq -r . 00:10:06.899 ************************************ 00:10:06.899 START TEST accel_dif_functional_tests 00:10:06.899 ************************************ 00:10:06.899 04:47:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:06.899 [2024-05-15 04:47:20.733466] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:06.899 [2024-05-15 04:47:20.733636] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43610 ] 00:10:06.899 [2024-05-15 04:47:20.901908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.156 [2024-05-15 04:47:21.139434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.156 [2024-05-15 04:47:21.139559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.156 [2024-05-15 04:47:21.139559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.415 00:10:07.415 00:10:07.415 CUnit - A unit testing framework for C - Version 2.1-3 00:10:07.415 http://cunit.sourceforge.net/ 00:10:07.415 00:10:07.415 00:10:07.415 Suite: accel_dif 00:10:07.415 Test: verify: DIF generated, GUARD check ...passed 00:10:07.415 Test: verify: DIF generated, APPTAG check ...passed 00:10:07.415 Test: verify: DIF generated, REFTAG check ...passed 00:10:07.415 Test: verify: DIF not generated, GUARD check ...[2024-05-15 04:47:21.608660] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:07.415 passed 00:10:07.415 Test: verify: DIF not generated, APPTAG check ...passed 00:10:07.415 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 04:47:21.608981] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:07.415 [2024-05-15 04:47:21.609060] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:07.415 [2024-05-15 04:47:21.609111] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:07.415 [2024-05-15 04:47:21.609158] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:07.415 passed 00:10:07.415 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:07.415 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 04:47:21.609201] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:07.415 [2024-05-15 04:47:21.609347] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:07.415 passed 00:10:07.415 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:07.415 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:07.415 Test: verify: REFTAG_INIT correct, REFTAG 
check ...passed 00:10:07.415 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 04:47:21.609734] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:07.415 passed 00:10:07.415 Test: generate copy: DIF generated, GUARD check ...passed 00:10:07.415 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:07.415 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:07.415 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:07.415 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:07.415 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:07.415 Test: generate copy: iovecs-len validate ...[2024-05-15 04:47:21.610295] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:10:07.415 passed 00:10:07.415 Test: generate copy: buffer alignment validate ...passed 00:10:07.415 00:10:07.415 Run Summary: Type Total Ran Passed Failed Inactive 00:10:07.415 suites 1 1 n/a 0 0 00:10:07.415 tests 20 20 20 0 0 00:10:07.415 asserts 204 204 204 0 n/a 00:10:07.415 00:10:07.415 Elapsed time = 0.000 seconds 00:10:09.317 00:10:09.317 real 0m2.533s 00:10:09.317 user 0m5.054s 00:10:09.317 sys 0m0.344s 00:10:09.317 04:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.317 04:47:23 -- common/autotest_common.sh@10 -- # set +x 00:10:09.317 ************************************ 00:10:09.317 END TEST accel_dif_functional_tests 00:10:09.317 ************************************ 00:10:09.317 ************************************ 00:10:09.317 END TEST accel 00:10:09.317 ************************************ 00:10:09.317 00:10:09.317 real 1m24.837s 00:10:09.317 user 1m14.949s 00:10:09.317 sys 0m8.885s 00:10:09.317 04:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.317 04:47:23 -- common/autotest_common.sh@10 -- # set +x 00:10:09.317 04:47:23 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:09.317 04:47:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:09.317 04:47:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.317 04:47:23 -- common/autotest_common.sh@10 -- # set +x 00:10:09.317 ************************************ 00:10:09.317 START TEST accel_rpc 00:10:09.317 ************************************ 00:10:09.317 04:47:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:09.317 * Looking for test storage... 00:10:09.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:09.317 04:47:23 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:09.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.317 04:47:23 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=43716 00:10:09.317 04:47:23 -- accel/accel_rpc.sh@15 -- # waitforlisten 43716 00:10:09.317 04:47:23 -- common/autotest_common.sh@819 -- # '[' -z 43716 ']' 00:10:09.317 04:47:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.317 04:47:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:09.317 04:47:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
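The accel_rpc suite starting here launches spdk_tgt with --wait-for-rpc (next trace line) so that opcode assignments can be made before the accel framework initializes, and the accel_assign_opcode test below drives exactly that flow. Condensed into a standalone sketch, with a polling loop standing in for the waitforlisten helper (the loop is an assumption, not the helper's actual implementation):

./build/bin/spdk_tgt --wait-for-rpc &
# Poll the default socket (/var/tmp/spdk.sock) until the target answers RPCs.
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
./scripts/rpc.py accel_assign_opc -o copy -m software    # assign before init
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expect: software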
00:10:09.317 04:47:23 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:09.317 04:47:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:09.317 04:47:23 -- common/autotest_common.sh@10 -- # set +x 00:10:09.317 [2024-05-15 04:47:23.434375] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:09.317 [2024-05-15 04:47:23.434556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43716 ] 00:10:09.574 [2024-05-15 04:47:23.590407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.832 [2024-05-15 04:47:23.841658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:09.832 [2024-05-15 04:47:23.842051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.090 04:47:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:10.090 04:47:24 -- common/autotest_common.sh@852 -- # return 0 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:10.090 04:47:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:10.090 04:47:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:10.090 04:47:24 -- common/autotest_common.sh@10 -- # set +x 00:10:10.090 ************************************ 00:10:10.090 START TEST accel_assign_opcode 00:10:10.090 ************************************ 00:10:10.090 04:47:24 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:10.090 04:47:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:10.090 04:47:24 -- common/autotest_common.sh@10 -- # set +x 00:10:10.090 [2024-05-15 04:47:24.178808] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:10.090 04:47:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:10.090 04:47:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:10.090 04:47:24 -- common/autotest_common.sh@10 -- # set +x 00:10:10.090 [2024-05-15 04:47:24.190817] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:10.090 04:47:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:10.090 04:47:24 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:10.090 04:47:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:10.090 04:47:24 -- common/autotest_common.sh@10 -- # set +x 00:10:11.026 04:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.026 04:47:25 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:11.026 04:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.026 04:47:25 -- common/autotest_common.sh@10 -- # set +x 00:10:11.026 04:47:25 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:11.026 04:47:25 -- accel/accel_rpc.sh@42 -- # grep software 
00:10:11.026 04:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.284 software 00:10:11.284 00:10:11.284 real 0m1.121s 00:10:11.284 user 0m0.057s 00:10:11.284 sys 0m0.013s 00:10:11.284 04:47:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.284 ************************************ 00:10:11.284 END TEST accel_assign_opcode 00:10:11.284 ************************************ 00:10:11.284 04:47:25 -- common/autotest_common.sh@10 -- # set +x 00:10:11.284 04:47:25 -- accel/accel_rpc.sh@55 -- # killprocess 43716 00:10:11.284 04:47:25 -- common/autotest_common.sh@926 -- # '[' -z 43716 ']' 00:10:11.284 04:47:25 -- common/autotest_common.sh@930 -- # kill -0 43716 00:10:11.284 04:47:25 -- common/autotest_common.sh@931 -- # uname 00:10:11.284 04:47:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:11.284 04:47:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43716 00:10:11.284 04:47:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:11.284 04:47:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:11.284 killing process with pid 43716 00:10:11.284 04:47:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43716' 00:10:11.284 04:47:25 -- common/autotest_common.sh@945 -- # kill 43716 00:10:11.284 04:47:25 -- common/autotest_common.sh@950 -- # wait 43716 00:10:13.817 ************************************ 00:10:13.817 END TEST accel_rpc 00:10:13.817 ************************************ 00:10:13.817 00:10:13.817 real 0m4.811s 00:10:13.817 user 0m4.419s 00:10:13.817 sys 0m0.714s 00:10:13.817 04:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.817 04:47:28 -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 04:47:28 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:14.076 04:47:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:14.076 04:47:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.076 04:47:28 -- common/autotest_common.sh@10 -- # set +x 00:10:14.076 ************************************ 00:10:14.076 START TEST app_cmdline 00:10:14.076 ************************************ 00:10:14.076 04:47:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:14.076 * Looking for test storage... 00:10:14.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:14.076 04:47:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:14.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.076 04:47:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=43869 00:10:14.076 04:47:28 -- app/cmdline.sh@18 -- # waitforlisten 43869 00:10:14.076 04:47:28 -- common/autotest_common.sh@819 -- # '[' -z 43869 ']' 00:10:14.076 04:47:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.076 04:47:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.076 04:47:28 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:14.076 04:47:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
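The app_cmdline suite launched above starts the target with an RPC allow-list; the entries that follow verify that the two whitelisted methods answer while everything else is rejected with a JSON-RPC method-not-found error. The effect, condensed:

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version         # allowed: returns the version object below
./scripts/rpc.py env_dpdk_get_mem_stats   # filtered: fails with code -32601, Method not found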
00:10:14.076 04:47:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.076 04:47:28 -- common/autotest_common.sh@10 -- # set +x 00:10:14.334 [2024-05-15 04:47:28.313090] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:14.334 [2024-05-15 04:47:28.313264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43869 ] 00:10:14.334 [2024-05-15 04:47:28.502288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.593 [2024-05-15 04:47:28.742436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:14.593 [2024-05-15 04:47:28.742639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.505 04:47:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.505 04:47:30 -- common/autotest_common.sh@852 -- # return 0 00:10:16.505 04:47:30 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:16.505 { 00:10:16.505 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:10:16.505 "fields": { 00:10:16.505 "major": 24, 00:10:16.505 "minor": 1, 00:10:16.505 "patch": 1, 00:10:16.505 "suffix": "-pre", 00:10:16.505 "commit": "36faa8c31" 00:10:16.505 } 00:10:16.505 } 00:10:16.505 04:47:30 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:16.505 04:47:30 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:16.505 04:47:30 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:16.505 04:47:30 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:16.505 04:47:30 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:16.505 04:47:30 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:16.505 04:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.505 04:47:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.505 04:47:30 -- app/cmdline.sh@26 -- # sort 00:10:16.505 04:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.505 04:47:30 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:16.505 04:47:30 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:16.505 04:47:30 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.505 04:47:30 -- common/autotest_common.sh@640 -- # local es=0 00:10:16.505 04:47:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.505 04:47:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.505 04:47:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:16.505 04:47:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.505 04:47:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:16.505 04:47:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.505 04:47:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:16.505 04:47:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.505 04:47:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 
00:10:16.506 04:47:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.789 request: 00:10:16.789 { 00:10:16.789 "method": "env_dpdk_get_mem_stats", 00:10:16.789 "req_id": 1 00:10:16.789 } 00:10:16.789 Got JSON-RPC error response 00:10:16.789 response: 00:10:16.789 { 00:10:16.789 "code": -32601, 00:10:16.789 "message": "Method not found" 00:10:16.789 } 00:10:16.789 04:47:30 -- common/autotest_common.sh@643 -- # es=1 00:10:16.789 04:47:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:16.789 04:47:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:16.789 04:47:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:16.789 04:47:30 -- app/cmdline.sh@1 -- # killprocess 43869 00:10:16.789 04:47:30 -- common/autotest_common.sh@926 -- # '[' -z 43869 ']' 00:10:16.789 04:47:30 -- common/autotest_common.sh@930 -- # kill -0 43869 00:10:16.789 04:47:30 -- common/autotest_common.sh@931 -- # uname 00:10:16.789 04:47:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:16.789 04:47:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43869 00:10:16.789 killing process with pid 43869 00:10:16.790 04:47:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:16.790 04:47:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:16.790 04:47:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43869' 00:10:16.790 04:47:30 -- common/autotest_common.sh@945 -- # kill 43869 00:10:16.790 04:47:30 -- common/autotest_common.sh@950 -- # wait 43869 00:10:20.073 ************************************ 00:10:20.073 END TEST app_cmdline 00:10:20.073 ************************************ 00:10:20.073 00:10:20.073 real 0m5.500s 00:10:20.073 user 0m5.705s 00:10:20.073 sys 0m0.765s 00:10:20.073 04:47:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.073 04:47:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.073 04:47:33 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:20.073 04:47:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:20.073 04:47:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.073 04:47:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.073 ************************************ 00:10:20.073 START TEST version 00:10:20.073 ************************************ 00:10:20.073 04:47:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:20.073 * Looking for test storage... 
00:10:20.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:20.073 04:47:33 -- app/version.sh@17 -- # get_header_version major 00:10:20.073 04:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:20.073 04:47:33 -- app/version.sh@14 -- # tr -d '"' 00:10:20.073 04:47:33 -- app/version.sh@14 -- # cut -f2 00:10:20.073 04:47:33 -- app/version.sh@17 -- # major=24 00:10:20.073 04:47:33 -- app/version.sh@18 -- # get_header_version minor 00:10:20.073 04:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:20.073 04:47:33 -- app/version.sh@14 -- # cut -f2 00:10:20.073 04:47:33 -- app/version.sh@14 -- # tr -d '"' 00:10:20.073 04:47:33 -- app/version.sh@18 -- # minor=1 00:10:20.073 04:47:33 -- app/version.sh@19 -- # get_header_version patch 00:10:20.073 04:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:20.073 04:47:33 -- app/version.sh@14 -- # tr -d '"' 00:10:20.073 04:47:33 -- app/version.sh@14 -- # cut -f2 00:10:20.073 04:47:33 -- app/version.sh@19 -- # patch=1 00:10:20.073 04:47:33 -- app/version.sh@20 -- # get_header_version suffix 00:10:20.074 04:47:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:20.074 04:47:33 -- app/version.sh@14 -- # tr -d '"' 00:10:20.074 04:47:33 -- app/version.sh@14 -- # cut -f2 00:10:20.074 04:47:33 -- app/version.sh@20 -- # suffix=-pre 00:10:20.074 04:47:33 -- app/version.sh@22 -- # version=24.1 00:10:20.074 04:47:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:20.074 04:47:33 -- app/version.sh@25 -- # version=24.1.1 00:10:20.074 04:47:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:10:20.074 04:47:33 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:20.074 04:47:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:20.074 04:47:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:10:20.074 04:47:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:10:20.074 ************************************ 00:10:20.074 END TEST version 00:10:20.074 ************************************ 00:10:20.074 00:10:20.074 real 0m0.145s 00:10:20.074 user 0m0.089s 00:10:20.074 sys 0m0.092s 00:10:20.074 04:47:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.074 04:47:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.074 04:47:33 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:10:20.074 04:47:33 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:10:20.074 04:47:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:20.074 04:47:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.074 04:47:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.074 ************************************ 00:10:20.074 START TEST blockdev_general 00:10:20.074 ************************************ 00:10:20.074 04:47:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:10:20.074 * Looking for test storage... 
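The version.sh run above derives each version component by scraping include/spdk/version.h with the grep/cut/tr pipeline visible in the trace. A condensed form of that helper (a sketch of the script's approach, not a verbatim copy):

get_header_version() {
    # e.g. get_header_version MAJOR -> 24 (version.h fields are tab-separated)
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
}
version=$(get_header_version MAJOR).$(get_header_version MINOR).$(get_header_version PATCH)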
00:10:20.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:20.074 04:47:33 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:20.074 04:47:33 -- bdev/nbd_common.sh@6 -- # set -e 00:10:20.074 04:47:33 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:20.074 04:47:33 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:20.074 04:47:33 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:20.074 04:47:33 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:20.074 04:47:33 -- bdev/blockdev.sh@18 -- # : 00:10:20.074 04:47:33 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:20.074 04:47:33 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:20.074 04:47:33 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:20.074 04:47:33 -- bdev/blockdev.sh@672 -- # uname -s 00:10:20.074 04:47:33 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:20.074 04:47:33 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:20.074 04:47:33 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:10:20.074 04:47:33 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:20.074 04:47:33 -- bdev/blockdev.sh@682 -- # dek= 00:10:20.074 04:47:33 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:20.074 04:47:33 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:20.074 04:47:33 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:20.074 04:47:33 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:10:20.074 04:47:33 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:10:20.074 04:47:33 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:20.074 04:47:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=44084 00:10:20.074 04:47:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:20.074 04:47:33 -- bdev/blockdev.sh@47 -- # waitforlisten 44084 00:10:20.074 04:47:33 -- common/autotest_common.sh@819 -- # '[' -z 44084 ']' 00:10:20.074 04:47:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.074 04:47:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.074 04:47:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.074 04:47:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.074 04:47:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.074 04:47:33 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:10:20.074 [2024-05-15 04:47:34.081507] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
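Once this target is up, setup_bdev_conf (below) assembles the stack that is later dumped as JSON: ten Malloc disks, split vbdevs over Malloc1 and Malloc2, a passthru (TestPT) on Malloc3, raid0/concat/raid1 volumes over Malloc4-9, and an AIO bdev backed by a 10 MB file. Roughly the same configuration by hand, assuming current rpc.py method names (only the dd and bdev_aio_create calls appear verbatim in the trace; sizes are read off the JSON dump):

./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512    # 65536 x 512 B blocks
./scripts/rpc.py bdev_split_create Malloc1 2             # -> Malloc1p0, Malloc1p1
./scripts/rpc.py bdev_split_create Malloc2 8             # -> Malloc2p0..Malloc2p7
./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r raid0 -b "Malloc4 Malloc5"
dd if=/dev/zero of=test/bdev/aiofile bs=2048 count=5000
./scripts/rpc.py bdev_aio_create test/bdev/aiofile AIO0 2048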
00:10:20.074 [2024-05-15 04:47:34.081685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44084 ] 00:10:20.074 [2024-05-15 04:47:34.270588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.333 [2024-05-15 04:47:34.510361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:20.333 [2024-05-15 04:47:34.510565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.899 04:47:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.899 04:47:34 -- common/autotest_common.sh@852 -- # return 0 00:10:20.899 04:47:34 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:20.899 04:47:34 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:10:20.899 04:47:34 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:10:20.899 04:47:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.899 04:47:34 -- common/autotest_common.sh@10 -- # set +x 00:10:21.836 [2024-05-15 04:47:35.956052] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:21.836 [2024-05-15 04:47:35.956148] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:21.836 00:10:21.836 [2024-05-15 04:47:35.964001] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:21.836 [2024-05-15 04:47:35.964040] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:21.836 00:10:21.836 Malloc0 00:10:22.094 Malloc1 00:10:22.094 Malloc2 00:10:22.094 Malloc3 00:10:22.094 Malloc4 00:10:22.094 Malloc5 00:10:22.094 Malloc6 00:10:22.353 Malloc7 00:10:22.353 Malloc8 00:10:22.353 Malloc9 00:10:22.353 [2024-05-15 04:47:36.470682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:22.353 [2024-05-15 04:47:36.470913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.353 [2024-05-15 04:47:36.470961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:10:22.353 [2024-05-15 04:47:36.470993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.353 TestPT 00:10:22.353 [2024-05-15 04:47:36.472808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.353 [2024-05-15 04:47:36.472853] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:22.353 04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.353 04:47:36 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:10:22.353 5000+0 records in 00:10:22.353 5000+0 records out 00:10:22.353 10240000 bytes (10 MB) copied, 0.0296586 s, 345 MB/s 00:10:22.353 04:47:36 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:10:22.353 04:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.353 04:47:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.353 AIO0 00:10:22.353 04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.353 04:47:36 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:22.353 04:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.353 04:47:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.353 
04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.353 04:47:36 -- bdev/blockdev.sh@738 -- # cat 00:10:22.353 04:47:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:22.353 04:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.354 04:47:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.613 04:47:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:22.613 04:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.613 04:47:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.613 04:47:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:22.613 04:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.613 04:47:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.613 04:47:36 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:22.613 04:47:36 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:22.613 04:47:36 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:22.613 04:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.613 04:47:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 04:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.613 04:47:36 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:22.613 04:47:36 -- bdev/blockdev.sh@747 -- # jq -r .name 00:10:22.614 04:47:36 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c4dadbeb-0e61-46ca-bec5-c2ca8dab92d9"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c4dadbeb-0e61-46ca-bec5-c2ca8dab92d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "dfab3214-9bfd-536a-9108-772a22cf34f4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "dfab3214-9bfd-536a-9108-772a22cf34f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3bb91de0-68d6-527e-82b6-06214792e107"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3bb91de0-68d6-527e-82b6-06214792e107",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "0dbd460d-3431-54cc-8d5e-46557a84b415"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0dbd460d-3431-54cc-8d5e-46557a84b415",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7dbadb9f-acaf-5d1f-b7ad-454f94bfa9cb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dbadb9f-acaf-5d1f-b7ad-454f94bfa9cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6b2acf00-8d26-55c2-9e17-09c39649779b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b2acf00-8d26-55c2-9e17-09c39649779b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0ce50045-347b-5202-b28c-72a7ae7649db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0ce50045-347b-5202-b28c-72a7ae7649db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "a796cf2f-8671-5865-9b1b-2bcd43f49f05"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a796cf2f-8671-5865-9b1b-2bcd43f49f05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "f92ef545-1b2a-5eaa-af59-ad929102201a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f92ef545-1b2a-5eaa-af59-ad929102201a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a7b2aaba-8558-5857-9970-4caff91a2ff8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7b2aaba-8558-5857-9970-4caff91a2ff8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "9582c19d-7d00-5c2e-8d46-cc0a85192a14"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9582c19d-7d00-5c2e-8d46-cc0a85192a14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "77d5ceb4-2762-50c1-a305-2f3cd36f8729"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "77d5ceb4-2762-50c1-a305-2f3cd36f8729",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5da9134e-0cab-4237-87ed-a272db271f8a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5da9134e-0cab-4237-87ed-a272db271f8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5da9134e-0cab-4237-87ed-a272db271f8a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "320baa88-cb19-4316-afb6-60a122497811",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ca9c360a-b038-440e-8bba-0bee27f55792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "46d13fb8-a4c3-45a4-868c-484f131a7054",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "95ae5ecc-8fd9-4b6b-95eb-31aec6f8625d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2b9713bd-cbd5-4981-bb91-7d3e6e8f922b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f76c4071-b4a1-4e7f-8ec6-76d7ffa4d52a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e26c3b75-e897-42b4-877a-6b1d5b930de7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e26c3b75-e897-42b4-877a-6b1d5b930de7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:10:22.614 04:47:36 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:22.614 04:47:36 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:10:22.614 04:47:36 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:22.614 04:47:36 -- bdev/blockdev.sh@752 -- # killprocess 44084 00:10:22.614 04:47:36 -- common/autotest_common.sh@926 -- # '[' -z 44084 ']' 00:10:22.614 04:47:36 -- common/autotest_common.sh@930 -- # kill -0 44084 00:10:22.614 04:47:36 -- common/autotest_common.sh@931 -- # uname 00:10:22.614 04:47:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:22.614 04:47:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 44084 00:10:22.873 killing process with pid 44084 00:10:22.873 04:47:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:22.873 04:47:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:22.873 04:47:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 44084' 00:10:22.873 04:47:36 -- common/autotest_common.sh@945 -- # kill 44084 00:10:22.873 04:47:36 -- common/autotest_common.sh@950 -- # wait 44084 00:10:27.054 04:47:40 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:27.054 04:47:40 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:27.054 04:47:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:27.054 
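The run_test line above hands the saved JSON config to the hello_bdev example, which opens Malloc0, writes a string, and reads it back (the NOTICE lines below). The invocation, as traced:

./build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0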
04:47:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.054 04:47:40 -- common/autotest_common.sh@10 -- # set +x 00:10:27.054 ************************************ 00:10:27.054 START TEST bdev_hello_world 00:10:27.054 ************************************ 00:10:27.054 04:47:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:27.054 [2024-05-15 04:47:40.636646] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:27.054 [2024-05-15 04:47:40.636992] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44197 ] 00:10:27.054 [2024-05-15 04:47:40.801288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.054 [2024-05-15 04:47:40.995611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.311 [2024-05-15 04:47:41.417804] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:27.311 [2024-05-15 04:47:41.417898] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:27.311 [2024-05-15 04:47:41.425737] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:27.311 [2024-05-15 04:47:41.425800] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:27.311 [2024-05-15 04:47:41.433791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:27.311 [2024-05-15 04:47:41.433833] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:27.311 [2024-05-15 04:47:41.433861] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:27.569 [2024-05-15 04:47:41.607795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:27.569 [2024-05-15 04:47:41.607892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.569 [2024-05-15 04:47:41.607940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:10:27.569 [2024-05-15 04:47:41.607967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.569 [2024-05-15 04:47:41.610025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.569 [2024-05-15 04:47:41.610068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:27.828 [2024-05-15 04:47:41.893874] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:27.828 [2024-05-15 04:47:41.893961] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:10:27.828 [2024-05-15 04:47:41.894027] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:27.828 [2024-05-15 04:47:41.894077] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:27.828 [2024-05-15 04:47:41.894134] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:27.828 [2024-05-15 04:47:41.894164] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:27.828 [2024-05-15 04:47:41.894205] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:10:27.828 00:10:27.828 [2024-05-15 04:47:41.894233] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:29.730 00:10:29.730 real 0m3.461s 00:10:29.730 user 0m2.696s 00:10:29.730 sys 0m0.557s 00:10:29.730 ************************************ 00:10:29.730 END TEST bdev_hello_world 00:10:29.730 ************************************ 00:10:29.730 04:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.730 04:47:43 -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 04:47:43 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:29.988 04:47:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:29.988 04:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.988 04:47:43 -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 ************************************ 00:10:29.988 START TEST bdev_bounds 00:10:29.988 ************************************ 00:10:29.988 Process bdevio pid: 44259 00:10:29.988 04:47:44 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:29.988 04:47:44 -- bdev/blockdev.sh@288 -- # bdevio_pid=44259 00:10:29.988 04:47:44 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:29.988 04:47:44 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 44259' 00:10:29.988 04:47:44 -- bdev/blockdev.sh@291 -- # waitforlisten 44259 00:10:29.988 04:47:44 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:29.988 04:47:44 -- common/autotest_common.sh@819 -- # '[' -z 44259 ']' 00:10:29.988 04:47:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.988 04:47:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:29.988 04:47:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.988 04:47:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:29.988 04:47:44 -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 [2024-05-15 04:47:44.152085] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
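bdev_bounds points the bdevio app at the same JSON config, started in wait mode (-w) with no reserved memory (-s 0); tests.py then fires the per-bdev CUnit suites listed below. Condensed from the trace:

./test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# Once the server is listening, trigger the registered test suites:
./test/bdev/bdevio/tests.py perform_tests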
00:10:29.988 [2024-05-15 04:47:44.152267] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44259 ] 00:10:30.247 [2024-05-15 04:47:44.328975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.505 [2024-05-15 04:47:44.568859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.505 [2024-05-15 04:47:44.568999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.505 [2024-05-15 04:47:44.568999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.072 [2024-05-15 04:47:45.123369] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:31.072 [2024-05-15 04:47:45.123482] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:31.072 [2024-05-15 04:47:45.131337] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:31.072 [2024-05-15 04:47:45.131421] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:31.072 [2024-05-15 04:47:45.139376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:31.072 [2024-05-15 04:47:45.139422] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:31.072 [2024-05-15 04:47:45.139443] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:31.331 [2024-05-15 04:47:45.358091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:31.331 [2024-05-15 04:47:45.358192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.331 [2024-05-15 04:47:45.358249] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:10:31.331 [2024-05-15 04:47:45.358273] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.331 [2024-05-15 04:47:45.360362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.331 [2024-05-15 04:47:45.360401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:32.265 04:47:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:32.265 04:47:46 -- common/autotest_common.sh@852 -- # return 0 00:10:32.265 04:47:46 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:32.265 I/O targets: 00:10:32.265 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:10:32.265 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:10:32.265 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:10:32.265 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:10:32.265 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:10:32.265 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:10:32.265 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:10:32.265 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:10:32.265 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:10:32.266 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:10:32.266 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:10:32.266 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:10:32.266 raid0: 131072 blocks of 512 bytes (64 MiB) 00:10:32.266 concat0: 131072 blocks of 512 bytes (64 MiB) 00:10:32.266 raid1: 65536 blocks of 512 bytes (32 MiB) 00:10:32.266 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:10:32.266 00:10:32.266 00:10:32.266 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.266 http://cunit.sourceforge.net/ 00:10:32.266 00:10:32.266 00:10:32.266 Suite: bdevio tests on: AIO0 00:10:32.266 Test: blockdev write read block ...passed 00:10:32.266 Test: blockdev write zeroes read block ...passed 00:10:32.524 Test: blockdev write zeroes read no split ...passed 00:10:32.524 Test: blockdev write zeroes read split ...passed 00:10:32.524 Test: blockdev write zeroes read split partial ...passed 00:10:32.524 Test: blockdev reset ...passed 00:10:32.524 Test: blockdev write read 8 blocks ...passed 00:10:32.524 Test: blockdev write read size > 128k ...passed 00:10:32.524 Test: blockdev write read invalid size ...passed 00:10:32.524 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.524 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.524 Test: blockdev write read max offset ...passed 00:10:32.524 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.524 Test: blockdev writev readv 8 blocks ...passed 00:10:32.524 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.524 Test: blockdev writev readv block ...passed 00:10:32.524 Test: blockdev writev readv size > 128k ...passed 00:10:32.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.524 Test: blockdev comparev and writev ...passed 00:10:32.524 Test: blockdev nvme passthru rw ...passed 00:10:32.524 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.524 Test: blockdev nvme admin passthru ...passed 00:10:32.524 Test: blockdev copy ...passed 00:10:32.524 Suite: bdevio tests on: raid1 00:10:32.524 Test: blockdev write read block ...passed 00:10:32.524 Test: blockdev write zeroes read block ...passed 00:10:32.524 Test: blockdev write zeroes read no split ...passed 00:10:32.524 Test: blockdev write zeroes read split ...passed 00:10:32.524 Test: blockdev write zeroes read split partial ...passed 00:10:32.524 Test: blockdev reset ...passed 00:10:32.524 Test: blockdev write read 8 blocks ...passed 00:10:32.524 Test: blockdev write read size > 128k ...passed 00:10:32.524 Test: blockdev write read invalid size ...passed 00:10:32.524 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.524 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.524 Test: blockdev write read max offset ...passed 00:10:32.524 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.524 Test: blockdev writev readv 8 blocks ...passed 00:10:32.524 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.524 Test: blockdev writev readv block ...passed 00:10:32.524 Test: blockdev writev readv size > 128k ...passed 00:10:32.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.524 Test: blockdev comparev and writev ...passed 00:10:32.524 Test: blockdev nvme passthru rw ...passed 00:10:32.524 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.524 Test: blockdev nvme admin passthru ...passed 00:10:32.524 Test: blockdev copy ...passed 00:10:32.524 Suite: bdevio tests on: concat0 00:10:32.524 Test: blockdev write read block ...passed 00:10:32.524 Test: blockdev write zeroes read block ...passed 00:10:32.524 Test: blockdev write zeroes read no split ...passed 00:10:32.524 Test: blockdev write zeroes read split ...passed 00:10:32.783 Test: blockdev write zeroes read split partial ...passed 00:10:32.783 Test: blockdev reset 
...passed 00:10:32.783 Test: blockdev write read 8 blocks ...passed 00:10:32.783 Test: blockdev write read size > 128k ...passed 00:10:32.783 Test: blockdev write read invalid size ...passed 00:10:32.783 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.783 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.783 Test: blockdev write read max offset ...passed 00:10:32.783 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.783 Test: blockdev writev readv 8 blocks ...passed 00:10:32.783 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.783 Test: blockdev writev readv block ...passed 00:10:32.783 Test: blockdev writev readv size > 128k ...passed 00:10:32.783 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.783 Test: blockdev comparev and writev ...passed 00:10:32.783 Test: blockdev nvme passthru rw ...passed 00:10:32.783 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.783 Test: blockdev nvme admin passthru ...passed 00:10:32.783 Test: blockdev copy ...passed 00:10:32.783 Suite: bdevio tests on: raid0 00:10:32.783 Test: blockdev write read block ...passed 00:10:32.783 Test: blockdev write zeroes read block ...passed 00:10:32.783 Test: blockdev write zeroes read no split ...passed 00:10:32.783 Test: blockdev write zeroes read split ...passed 00:10:32.783 Test: blockdev write zeroes read split partial ...passed 00:10:32.783 Test: blockdev reset ...passed 00:10:32.783 Test: blockdev write read 8 blocks ...passed 00:10:32.783 Test: blockdev write read size > 128k ...passed 00:10:32.783 Test: blockdev write read invalid size ...passed 00:10:32.783 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.783 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.783 Test: blockdev write read max offset ...passed 00:10:32.783 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.783 Test: blockdev writev readv 8 blocks ...passed 00:10:32.783 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.783 Test: blockdev writev readv block ...passed 00:10:32.784 Test: blockdev writev readv size > 128k ...passed 00:10:32.784 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.784 Test: blockdev comparev and writev ...passed 00:10:32.784 Test: blockdev nvme passthru rw ...passed 00:10:32.784 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.784 Test: blockdev nvme admin passthru ...passed 00:10:32.784 Test: blockdev copy ...passed 00:10:32.784 Suite: bdevio tests on: TestPT 00:10:32.784 Test: blockdev write read block ...passed 00:10:32.784 Test: blockdev write zeroes read block ...passed 00:10:32.784 Test: blockdev write zeroes read no split ...passed 00:10:32.784 Test: blockdev write zeroes read split ...passed 00:10:32.784 Test: blockdev write zeroes read split partial ...passed 00:10:32.784 Test: blockdev reset ...passed 00:10:32.784 Test: blockdev write read 8 blocks ...passed 00:10:32.784 Test: blockdev write read size > 128k ...passed 00:10:32.784 Test: blockdev write read invalid size ...passed 00:10:32.784 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.784 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.784 Test: blockdev write read max offset ...passed 00:10:32.784 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.784 Test: blockdev writev readv 8 blocks 
...passed 00:10:32.784 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.784 Test: blockdev writev readv block ...passed 00:10:32.784 Test: blockdev writev readv size > 128k ...passed 00:10:32.784 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.784 Test: blockdev comparev and writev ...passed 00:10:32.784 Test: blockdev nvme passthru rw ...passed 00:10:32.784 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.784 Test: blockdev nvme admin passthru ...passed 00:10:32.784 Test: blockdev copy ...passed 00:10:32.784 Suite: bdevio tests on: Malloc2p7 00:10:32.784 Test: blockdev write read block ...passed 00:10:32.784 Test: blockdev write zeroes read block ...passed 00:10:32.784 Test: blockdev write zeroes read no split ...passed 00:10:32.784 Test: blockdev write zeroes read split ...passed 00:10:32.784 Test: blockdev write zeroes read split partial ...passed 00:10:32.784 Test: blockdev reset ...passed 00:10:32.784 Test: blockdev write read 8 blocks ...passed 00:10:32.784 Test: blockdev write read size > 128k ...passed 00:10:32.784 Test: blockdev write read invalid size ...passed 00:10:32.784 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.784 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.784 Test: blockdev write read max offset ...passed 00:10:32.784 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.784 Test: blockdev writev readv 8 blocks ...passed 00:10:32.784 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.784 Test: blockdev writev readv block ...passed 00:10:32.784 Test: blockdev writev readv size > 128k ...passed 00:10:32.784 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.784 Test: blockdev comparev and writev ...passed 00:10:32.784 Test: blockdev nvme passthru rw ...passed 00:10:32.784 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.784 Test: blockdev nvme admin passthru ...passed 00:10:32.784 Test: blockdev copy ...passed 00:10:32.784 Suite: bdevio tests on: Malloc2p6 00:10:32.784 Test: blockdev write read block ...passed 00:10:32.784 Test: blockdev write zeroes read block ...passed 00:10:32.784 Test: blockdev write zeroes read no split ...passed 00:10:33.043 Test: blockdev write zeroes read split ...passed 00:10:33.043 Test: blockdev write zeroes read split partial ...passed 00:10:33.043 Test: blockdev reset ...passed 00:10:33.043 Test: blockdev write read 8 blocks ...passed 00:10:33.043 Test: blockdev write read size > 128k ...passed 00:10:33.043 Test: blockdev write read invalid size ...passed 00:10:33.043 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.043 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.043 Test: blockdev write read max offset ...passed 00:10:33.043 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.043 Test: blockdev writev readv 8 blocks ...passed 00:10:33.043 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.043 Test: blockdev writev readv block ...passed 00:10:33.043 Test: blockdev writev readv size > 128k ...passed 00:10:33.043 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.043 Test: blockdev comparev and writev ...passed 00:10:33.043 Test: blockdev nvme passthru rw ...passed 00:10:33.043 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.043 Test: blockdev nvme admin passthru ...passed 00:10:33.043 Test: blockdev copy ...passed 
00:10:33.043 Suite: bdevio tests on: Malloc2p5 00:10:33.043 Test: blockdev write read block ...passed 00:10:33.043 Test: blockdev write zeroes read block ...passed 00:10:33.043 Test: blockdev write zeroes read no split ...passed 00:10:33.043 Test: blockdev write zeroes read split ...passed 00:10:33.043 Test: blockdev write zeroes read split partial ...passed 00:10:33.043 Test: blockdev reset ...passed 00:10:33.043 Test: blockdev write read 8 blocks ...passed 00:10:33.043 Test: blockdev write read size > 128k ...passed 00:10:33.043 Test: blockdev write read invalid size ...passed 00:10:33.043 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.043 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.043 Test: blockdev write read max offset ...passed 00:10:33.043 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.043 Test: blockdev writev readv 8 blocks ...passed 00:10:33.043 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.043 Test: blockdev writev readv block ...passed 00:10:33.043 Test: blockdev writev readv size > 128k ...passed 00:10:33.043 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.043 Test: blockdev comparev and writev ...passed 00:10:33.043 Test: blockdev nvme passthru rw ...passed 00:10:33.043 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.043 Test: blockdev nvme admin passthru ...passed 00:10:33.043 Test: blockdev copy ...passed 00:10:33.043 Suite: bdevio tests on: Malloc2p4 00:10:33.043 Test: blockdev write read block ...passed 00:10:33.043 Test: blockdev write zeroes read block ...passed 00:10:33.043 Test: blockdev write zeroes read no split ...passed 00:10:33.043 Test: blockdev write zeroes read split ...passed 00:10:33.043 Test: blockdev write zeroes read split partial ...passed 00:10:33.043 Test: blockdev reset ...passed 00:10:33.043 Test: blockdev write read 8 blocks ...passed 00:10:33.043 Test: blockdev write read size > 128k ...passed 00:10:33.043 Test: blockdev write read invalid size ...passed 00:10:33.043 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.043 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.043 Test: blockdev write read max offset ...passed 00:10:33.043 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.043 Test: blockdev writev readv 8 blocks ...passed 00:10:33.043 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.044 Test: blockdev writev readv block ...passed 00:10:33.044 Test: blockdev writev readv size > 128k ...passed 00:10:33.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.044 Test: blockdev comparev and writev ...passed 00:10:33.044 Test: blockdev nvme passthru rw ...passed 00:10:33.044 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.044 Test: blockdev nvme admin passthru ...passed 00:10:33.044 Test: blockdev copy ...passed 00:10:33.044 Suite: bdevio tests on: Malloc2p3 00:10:33.044 Test: blockdev write read block ...passed 00:10:33.044 Test: blockdev write zeroes read block ...passed 00:10:33.044 Test: blockdev write zeroes read no split ...passed 00:10:33.044 Test: blockdev write zeroes read split ...passed 00:10:33.044 Test: blockdev write zeroes read split partial ...passed 00:10:33.044 Test: blockdev reset ...passed 00:10:33.044 Test: blockdev write read 8 blocks ...passed 00:10:33.044 Test: blockdev write read size > 128k ...passed 00:10:33.044 Test: 
blockdev write read invalid size ...passed 00:10:33.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.044 Test: blockdev write read max offset ...passed 00:10:33.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.044 Test: blockdev writev readv 8 blocks ...passed 00:10:33.044 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.044 Test: blockdev writev readv block ...passed 00:10:33.044 Test: blockdev writev readv size > 128k ...passed 00:10:33.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.044 Test: blockdev comparev and writev ...passed 00:10:33.044 Test: blockdev nvme passthru rw ...passed 00:10:33.044 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.044 Test: blockdev nvme admin passthru ...passed 00:10:33.044 Test: blockdev copy ...passed 00:10:33.044 Suite: bdevio tests on: Malloc2p2 00:10:33.044 Test: blockdev write read block ...passed 00:10:33.044 Test: blockdev write zeroes read block ...passed 00:10:33.044 Test: blockdev write zeroes read no split ...passed 00:10:33.044 Test: blockdev write zeroes read split ...passed 00:10:33.303 Test: blockdev write zeroes read split partial ...passed 00:10:33.303 Test: blockdev reset ...passed 00:10:33.303 Test: blockdev write read 8 blocks ...passed 00:10:33.303 Test: blockdev write read size > 128k ...passed 00:10:33.303 Test: blockdev write read invalid size ...passed 00:10:33.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.303 Test: blockdev write read max offset ...passed 00:10:33.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.303 Test: blockdev writev readv 8 blocks ...passed 00:10:33.303 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.303 Test: blockdev writev readv block ...passed 00:10:33.303 Test: blockdev writev readv size > 128k ...passed 00:10:33.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.303 Test: blockdev comparev and writev ...passed 00:10:33.303 Test: blockdev nvme passthru rw ...passed 00:10:33.304 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.304 Test: blockdev nvme admin passthru ...passed 00:10:33.304 Test: blockdev copy ...passed 00:10:33.304 Suite: bdevio tests on: Malloc2p1 00:10:33.304 Test: blockdev write read block ...passed 00:10:33.304 Test: blockdev write zeroes read block ...passed 00:10:33.304 Test: blockdev write zeroes read no split ...passed 00:10:33.304 Test: blockdev write zeroes read split ...passed 00:10:33.304 Test: blockdev write zeroes read split partial ...passed 00:10:33.304 Test: blockdev reset ...passed 00:10:33.304 Test: blockdev write read 8 blocks ...passed 00:10:33.304 Test: blockdev write read size > 128k ...passed 00:10:33.304 Test: blockdev write read invalid size ...passed 00:10:33.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.304 Test: blockdev write read max offset ...passed 00:10:33.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.304 Test: blockdev writev readv 8 blocks ...passed 00:10:33.304 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.304 Test: blockdev writev readv block ...passed 
00:10:33.304 Test: blockdev writev readv size > 128k ...passed 00:10:33.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.304 Test: blockdev comparev and writev ...passed 00:10:33.304 Test: blockdev nvme passthru rw ...passed 00:10:33.304 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.304 Test: blockdev nvme admin passthru ...passed 00:10:33.304 Test: blockdev copy ...passed 00:10:33.304 Suite: bdevio tests on: Malloc2p0 00:10:33.304 Test: blockdev write read block ...passed 00:10:33.304 Test: blockdev write zeroes read block ...passed 00:10:33.304 Test: blockdev write zeroes read no split ...passed 00:10:33.304 Test: blockdev write zeroes read split ...passed 00:10:33.304 Test: blockdev write zeroes read split partial ...passed 00:10:33.304 Test: blockdev reset ...passed 00:10:33.304 Test: blockdev write read 8 blocks ...passed 00:10:33.304 Test: blockdev write read size > 128k ...passed 00:10:33.304 Test: blockdev write read invalid size ...passed 00:10:33.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.304 Test: blockdev write read max offset ...passed 00:10:33.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.304 Test: blockdev writev readv 8 blocks ...passed 00:10:33.304 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.304 Test: blockdev writev readv block ...passed 00:10:33.304 Test: blockdev writev readv size > 128k ...passed 00:10:33.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.304 Test: blockdev comparev and writev ...passed 00:10:33.304 Test: blockdev nvme passthru rw ...passed 00:10:33.304 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.304 Test: blockdev nvme admin passthru ...passed 00:10:33.304 Test: blockdev copy ...passed 00:10:33.304 Suite: bdevio tests on: Malloc1p1 00:10:33.304 Test: blockdev write read block ...passed 00:10:33.304 Test: blockdev write zeroes read block ...passed 00:10:33.304 Test: blockdev write zeroes read no split ...passed 00:10:33.304 Test: blockdev write zeroes read split ...passed 00:10:33.304 Test: blockdev write zeroes read split partial ...passed 00:10:33.304 Test: blockdev reset ...passed 00:10:33.304 Test: blockdev write read 8 blocks ...passed 00:10:33.304 Test: blockdev write read size > 128k ...passed 00:10:33.304 Test: blockdev write read invalid size ...passed 00:10:33.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.304 Test: blockdev write read max offset ...passed 00:10:33.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.304 Test: blockdev writev readv 8 blocks ...passed 00:10:33.304 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.304 Test: blockdev writev readv block ...passed 00:10:33.304 Test: blockdev writev readv size > 128k ...passed 00:10:33.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.304 Test: blockdev comparev and writev ...passed 00:10:33.304 Test: blockdev nvme passthru rw ...passed 00:10:33.304 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.304 Test: blockdev nvme admin passthru ...passed 00:10:33.304 Test: blockdev copy ...passed 00:10:33.304 Suite: bdevio tests on: Malloc1p0 00:10:33.304 Test: blockdev write read block ...passed 00:10:33.304 Test: blockdev 
write zeroes read block ...passed 00:10:33.304 Test: blockdev write zeroes read no split ...passed 00:10:33.304 Test: blockdev write zeroes read split ...passed 00:10:33.563 Test: blockdev write zeroes read split partial ...passed 00:10:33.563 Test: blockdev reset ...passed 00:10:33.563 Test: blockdev write read 8 blocks ...passed 00:10:33.563 Test: blockdev write read size > 128k ...passed 00:10:33.563 Test: blockdev write read invalid size ...passed 00:10:33.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.563 Test: blockdev write read max offset ...passed 00:10:33.563 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.563 Test: blockdev writev readv 8 blocks ...passed 00:10:33.563 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.563 Test: blockdev writev readv block ...passed 00:10:33.563 Test: blockdev writev readv size > 128k ...passed 00:10:33.563 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.563 Test: blockdev comparev and writev ...passed 00:10:33.563 Test: blockdev nvme passthru rw ...passed 00:10:33.563 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.563 Test: blockdev nvme admin passthru ...passed 00:10:33.563 Test: blockdev copy ...passed 00:10:33.563 Suite: bdevio tests on: Malloc0 00:10:33.563 Test: blockdev write read block ...passed 00:10:33.563 Test: blockdev write zeroes read block ...passed 00:10:33.563 Test: blockdev write zeroes read no split ...passed 00:10:33.563 Test: blockdev write zeroes read split ...passed 00:10:33.563 Test: blockdev write zeroes read split partial ...passed 00:10:33.563 Test: blockdev reset ...passed 00:10:33.563 Test: blockdev write read 8 blocks ...passed 00:10:33.563 Test: blockdev write read size > 128k ...passed 00:10:33.563 Test: blockdev write read invalid size ...passed 00:10:33.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.563 Test: blockdev write read max offset ...passed 00:10:33.563 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.563 Test: blockdev writev readv 8 blocks ...passed 00:10:33.563 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.563 Test: blockdev writev readv block ...passed 00:10:33.563 Test: blockdev writev readv size > 128k ...passed 00:10:33.563 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.563 Test: blockdev comparev and writev ...passed 00:10:33.563 Test: blockdev nvme passthru rw ...passed 00:10:33.563 Test: blockdev nvme passthru vendor specific ...passed 00:10:33.563 Test: blockdev nvme admin passthru ...passed 00:10:33.563 Test: blockdev copy ...passed 00:10:33.563 00:10:33.563 Run Summary: Type Total Ran Passed Failed Inactive 00:10:33.563 suites 16 16 n/a 0 0 00:10:33.564 tests 368 368 368 0 0 00:10:33.564 asserts 2224 2224 2224 0 n/a 00:10:33.564 00:10:33.564 Elapsed time = 3.230 seconds 00:10:33.564 0 00:10:33.564 04:47:47 -- bdev/blockdev.sh@293 -- # killprocess 44259 00:10:33.564 04:47:47 -- common/autotest_common.sh@926 -- # '[' -z 44259 ']' 00:10:33.564 04:47:47 -- common/autotest_common.sh@930 -- # kill -0 44259 00:10:33.564 04:47:47 -- common/autotest_common.sh@931 -- # uname 00:10:33.564 04:47:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:33.564 04:47:47 -- common/autotest_common.sh@932 
-- # ps --no-headers -o comm= 44259 00:10:33.564 killing process with pid 44259 00:10:33.564 04:47:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:33.564 04:47:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:33.564 04:47:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 44259' 00:10:33.564 04:47:47 -- common/autotest_common.sh@945 -- # kill 44259 00:10:33.564 04:47:47 -- common/autotest_common.sh@950 -- # wait 44259 00:10:36.099 04:47:49 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:36.099 00:10:36.099 real 0m5.954s 00:10:36.099 user 0m15.695s 00:10:36.099 sys 0m0.738s 00:10:36.099 04:47:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.099 04:47:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.099 ************************************ 00:10:36.099 END TEST bdev_bounds 00:10:36.099 ************************************ 00:10:36.099 04:47:50 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.099 04:47:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.099 ************************************ 00:10:36.099 START TEST bdev_nbd 00:10:36.099 ************************************ 00:10:36.099 04:47:50 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@298 -- # uname -s 00:10:36.099 04:47:50 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:10:36.099 04:47:50 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.099 04:47:50 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:36.099 04:47:50 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:10:36.099 04:47:50 -- bdev/blockdev.sh@302 -- # local bdev_all 00:10:36.099 04:47:50 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:10:36.099 04:47:50 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:10:36.099 04:47:50 -- bdev/blockdev.sh@307 -- # modprobe -q nbd nbds_max=16 00:10:36.099 ************************************ 00:10:36.099 END TEST bdev_nbd 00:10:36.099 ************************************ 00:10:36.099 04:47:50 -- bdev/blockdev.sh@307 -- # return 0 00:10:36.099 00:10:36.099 real 0m0.010s 00:10:36.099 user 0m0.004s 00:10:36.099 sys 0m0.007s 00:10:36.099 04:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.099 04:47:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.099 04:47:50 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:36.099 04:47:50 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.099 04:47:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.099 ************************************ 00:10:36.099 START TEST bdev_fio 
00:10:36.099 ************************************ 00:10:36.099 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:10:36.099 04:47:50 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@329 -- # local env_context 00:10:36.099 04:47:50 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:10:36.099 04:47:50 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:10:36.099 04:47:50 -- bdev/blockdev.sh@337 -- # echo '' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:10:36.099 04:47:50 -- bdev/blockdev.sh@337 -- # env_context= 00:10:36.099 04:47:50 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:36.099 04:47:50 -- common/autotest_common.sh@1260 -- # local workload=verify 00:10:36.099 04:47:50 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:10:36.099 04:47:50 -- common/autotest_common.sh@1262 -- # local env_context= 00:10:36.099 04:47:50 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:10:36.099 04:47:50 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:36.099 04:47:50 -- common/autotest_common.sh@1280 -- # cat 00:10:36.099 04:47:50 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1293 -- # cat 00:10:36.099 04:47:50 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:10:36.099 04:47:50 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:10:36.099 04:47:50 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:10:36.099 04:47:50 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 
-- # echo filename=Malloc2p2 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:10:36.099 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:10:36.099 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.099 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:10:36.100 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.100 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:10:36.100 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.100 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:10:36.100 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.100 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:10:36.100 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.100 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:10:36.100 04:47:50 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:10:36.100 04:47:50 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:10:36.100 04:47:50 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:10:36.100 04:47:50 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:36.100 04:47:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:36.100 04:47:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.100 04:47:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.100 ************************************ 00:10:36.100 START TEST bdev_fio_rw_verify 00:10:36.100 ************************************ 00:10:36.100 04:47:50 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:36.100 04:47:50 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:36.100 04:47:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:10:36.100 04:47:50 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:10:36.100 04:47:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:10:36.100 04:47:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:36.100 04:47:50 -- common/autotest_common.sh@1320 -- # shift 00:10:36.100 04:47:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:10:36.100 04:47:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:10:36.100 04:47:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:10:36.100 04:47:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:36.100 04:47:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:10:36.100 04:47:50 -- common/autotest_common.sh@1324 -- # asan_lib=/lib64/libasan.so.6 00:10:36.100 04:47:50 -- common/autotest_common.sh@1325 -- # [[ -n /lib64/libasan.so.6 ]] 00:10:36.100 04:47:50 -- common/autotest_common.sh@1326 -- # break 00:10:36.100 04:47:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:10:36.100 04:47:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:36.359 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_TestPT: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:36.359 fio-3.35 00:10:36.359 Starting 16 threads 00:10:51.260 00:10:51.260 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=44450: Wed May 15 04:48:02 2024 00:10:51.260 read: IOPS=114k, BW=444MiB/s (466MB/s)(4446MiB/10009msec) 00:10:51.260 slat (nsec): min=770, max=75435k, avg=10739.70, stdev=175975.93 00:10:51.260 clat (usec): min=3, max=75560, avg=121.95, stdev=621.59 00:10:51.260 lat (usec): min=8, max=75582, avg=132.69, stdev=645.76 00:10:51.260 clat percentiles (usec): 00:10:51.260 | 50.000th=[ 74], 99.000th=[ 750], 99.900th=[11600], 99.990th=[22414], 00:10:51.260 | 99.999th=[42206] 00:10:51.260 write: IOPS=185k, BW=722MiB/s (757MB/s)(7202MiB/9980msec); 0 zone resets 00:10:51.260 slat (usec): min=2, max=141165, avg=53.71, stdev=955.51 00:10:51.260 clat (usec): min=4, max=141369, avg=279.27, stdev=1909.05 00:10:51.260 lat (usec): min=17, max=141381, avg=332.99, stdev=2135.61 00:10:51.260 clat percentiles (usec): 00:10:51.260 | 50.000th=[ 114], 99.000th=[ 4883], 99.900th=[ 27919], 00:10:51.260 | 99.990th=[ 70779], 99.999th=[116917] 00:10:51.260 bw ( KiB/s): min=468118, max=1076412, per=98.73%, avg=729624.79, stdev=10538.94, samples=304 00:10:51.260 iops : min=117025, max=269097, avg=182401.95, stdev=2634.75, samples=304 00:10:51.260 lat (usec) : 4=0.01%, 10=0.04%, 20=0.51%, 50=15.72%, 100=39.52% 00:10:51.260 lat (usec) : 250=39.59%, 500=1.31%, 750=1.87%, 1000=0.42% 00:10:51.260 lat (msec) : 2=0.12%, 4=0.13%, 10=0.28%, 20=0.36%, 50=0.09% 00:10:51.260 lat (msec) : 100=0.01%, 250=0.01% 00:10:51.260 cpu : usr=53.53%, sys=1.24%, ctx=19186, majf=0, minf=127597 00:10:51.260 IO depths : 1=12.4%, 2=24.8%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.260 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.260 issued rwts: total=1138252,1843828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.260 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:51.260 00:10:51.260 Run status group 0 (all jobs): 00:10:51.260 READ: bw=444MiB/s (466MB/s), 444MiB/s-444MiB/s (466MB/s-466MB/s), io=4446MiB (4662MB), run=10009-10009msec 00:10:51.260 WRITE: bw=722MiB/s (757MB/s), 722MiB/s-722MiB/s (757MB/s-757MB/s), io=7202MiB (7552MB), run=9980-9980msec 00:10:51.522 ----------------------------------------------------- 00:10:51.522 Suppressions used: 00:10:51.522 count bytes template 00:10:51.522 16 140 /usr/src/fio/parse.c 00:10:51.522 12646 1214016 /usr/src/fio/iolog.c 00:10:51.522 2 596 libcrypto.so 00:10:51.522 ----------------------------------------------------- 00:10:51.522 00:10:51.522 00:10:51.522 real 0m15.382s 00:10:51.522 user 1m38.338s 00:10:51.522 sys 0m2.753s 00:10:51.522 04:48:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.522 ************************************ 00:10:51.522 END TEST bdev_fio_rw_verify 00:10:51.522 ************************************ 00:10:51.522 
04:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:51.522 04:48:05 -- bdev/blockdev.sh@348 -- # rm -f 00:10:51.522 04:48:05 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:51.522 04:48:05 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:10:51.522 04:48:05 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:51.522 04:48:05 -- common/autotest_common.sh@1260 -- # local workload=trim 00:10:51.522 04:48:05 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:10:51.522 04:48:05 -- common/autotest_common.sh@1262 -- # local env_context= 00:10:51.522 04:48:05 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:10:51.522 04:48:05 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:10:51.522 04:48:05 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:10:51.522 04:48:05 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:10:51.522 04:48:05 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:51.522 04:48:05 -- common/autotest_common.sh@1280 -- # cat 00:10:51.522 04:48:05 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:10:51.522 04:48:05 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:10:51.522 04:48:05 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:10:51.522 04:48:05 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:10:51.523 04:48:05 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c4dadbeb-0e61-46ca-bec5-c2ca8dab92d9"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c4dadbeb-0e61-46ca-bec5-c2ca8dab92d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "dfab3214-9bfd-536a-9108-772a22cf34f4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "dfab3214-9bfd-536a-9108-772a22cf34f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3bb91de0-68d6-527e-82b6-06214792e107"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3bb91de0-68d6-527e-82b6-06214792e107",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' 
' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "0dbd460d-3431-54cc-8d5e-46557a84b415"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0dbd460d-3431-54cc-8d5e-46557a84b415",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7dbadb9f-acaf-5d1f-b7ad-454f94bfa9cb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dbadb9f-acaf-5d1f-b7ad-454f94bfa9cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6b2acf00-8d26-55c2-9e17-09c39649779b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b2acf00-8d26-55c2-9e17-09c39649779b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0ce50045-347b-5202-b28c-72a7ae7649db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0ce50045-347b-5202-b28c-72a7ae7649db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' 
"a796cf2f-8671-5865-9b1b-2bcd43f49f05"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a796cf2f-8671-5865-9b1b-2bcd43f49f05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "f92ef545-1b2a-5eaa-af59-ad929102201a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f92ef545-1b2a-5eaa-af59-ad929102201a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a7b2aaba-8558-5857-9970-4caff91a2ff8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7b2aaba-8558-5857-9970-4caff91a2ff8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "9582c19d-7d00-5c2e-8d46-cc0a85192a14"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9582c19d-7d00-5c2e-8d46-cc0a85192a14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "77d5ceb4-2762-50c1-a305-2f3cd36f8729"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "77d5ceb4-2762-50c1-a305-2f3cd36f8729",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5da9134e-0cab-4237-87ed-a272db271f8a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5da9134e-0cab-4237-87ed-a272db271f8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5da9134e-0cab-4237-87ed-a272db271f8a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "320baa88-cb19-4316-afb6-60a122497811",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ca9c360a-b038-440e-8bba-0bee27f55792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "46d13fb8-a4c3-45a4-868c-484f131a7054",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "95ae5ecc-8fd9-4b6b-95eb-31aec6f8625d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2b9713bd-cbd5-4981-bb91-7d3e6e8f922b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f76c4071-b4a1-4e7f-8ec6-76d7ffa4d52a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e26c3b75-e897-42b4-877a-6b1d5b930de7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e26c3b75-e897-42b4-877a-6b1d5b930de7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:10:51.523 04:48:05 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:10:51.523 Malloc1p0 00:10:51.523 Malloc1p1 00:10:51.523 Malloc2p0 00:10:51.523 Malloc2p1 00:10:51.523 Malloc2p2 00:10:51.523 Malloc2p3 00:10:51.523 Malloc2p4 00:10:51.523 Malloc2p5 00:10:51.523 Malloc2p6 00:10:51.523 Malloc2p7 00:10:51.523 TestPT 00:10:51.523 raid0 00:10:51.523 concat0 ]] 00:10:51.523 04:48:05 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:10:51.524 04:48:05 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c4dadbeb-0e61-46ca-bec5-c2ca8dab92d9"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c4dadbeb-0e61-46ca-bec5-c2ca8dab92d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "dfab3214-9bfd-536a-9108-772a22cf34f4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"dfab3214-9bfd-536a-9108-772a22cf34f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3bb91de0-68d6-527e-82b6-06214792e107"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3bb91de0-68d6-527e-82b6-06214792e107",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "0dbd460d-3431-54cc-8d5e-46557a84b415"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0dbd460d-3431-54cc-8d5e-46557a84b415",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7dbadb9f-acaf-5d1f-b7ad-454f94bfa9cb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dbadb9f-acaf-5d1f-b7ad-454f94bfa9cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6b2acf00-8d26-55c2-9e17-09c39649779b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b2acf00-8d26-55c2-9e17-09c39649779b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0ce50045-347b-5202-b28c-72a7ae7649db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0ce50045-347b-5202-b28c-72a7ae7649db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a796cf2f-8671-5865-9b1b-2bcd43f49f05"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a796cf2f-8671-5865-9b1b-2bcd43f49f05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "f92ef545-1b2a-5eaa-af59-ad929102201a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f92ef545-1b2a-5eaa-af59-ad929102201a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a7b2aaba-8558-5857-9970-4caff91a2ff8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7b2aaba-8558-5857-9970-4caff91a2ff8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "9582c19d-7d00-5c2e-8d46-cc0a85192a14"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9582c19d-7d00-5c2e-8d46-cc0a85192a14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "77d5ceb4-2762-50c1-a305-2f3cd36f8729"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "77d5ceb4-2762-50c1-a305-2f3cd36f8729",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5da9134e-0cab-4237-87ed-a272db271f8a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5da9134e-0cab-4237-87ed-a272db271f8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5da9134e-0cab-4237-87ed-a272db271f8a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "320baa88-cb19-4316-afb6-60a122497811",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "ca9c360a-b038-440e-8bba-0bee27f55792",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' 
],' ' "driver_specific": {' ' "raid": {' ' "uuid": "78b98a3f-89b2-45b9-af6e-8fcf3e8859e1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "46d13fb8-a4c3-45a4-868c-484f131a7054",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "95ae5ecc-8fd9-4b6b-95eb-31aec6f8625d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "96217c7e-f1a1-49d3-9a16-44f7aaa6ad72",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2b9713bd-cbd5-4981-bb91-7d3e6e8f922b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f76c4071-b4a1-4e7f-8ec6-76d7ffa4d52a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e26c3b75-e897-42b4-877a-6b1d5b930de7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e26c3b75-e897-42b4-877a-6b1d5b930de7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:10:51.784 04:48:05 -- 
bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:10:51.784 04:48:05 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:51.784 04:48:05 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:10:51.784 04:48:05 -- bdev/blockdev.sh@356 -- # echo 
filename=concat0 00:10:51.784 04:48:05 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:51.784 04:48:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:51.784 04:48:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.784 04:48:05 -- common/autotest_common.sh@10 -- # set +x 00:10:51.784 ************************************ 00:10:51.784 START TEST bdev_fio_trim 00:10:51.784 ************************************ 00:10:51.784 04:48:05 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:51.784 04:48:05 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:51.784 04:48:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:10:51.784 04:48:05 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:10:51.784 04:48:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:10:51.785 04:48:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:51.785 04:48:05 -- common/autotest_common.sh@1320 -- # shift 00:10:51.785 04:48:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:10:51.785 04:48:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:10:51.785 04:48:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:10:51.785 04:48:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:10:51.785 04:48:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:51.785 04:48:05 -- common/autotest_common.sh@1324 -- # asan_lib=/lib64/libasan.so.6 00:10:51.785 04:48:05 -- common/autotest_common.sh@1325 -- # [[ -n /lib64/libasan.so.6 ]] 00:10:51.785 04:48:05 -- common/autotest_common.sh@1326 -- # break 00:10:51.785 04:48:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:10:51.785 04:48:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:52.044 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:52.044 fio-3.35 00:10:52.044 Starting 14 threads 00:11:04.252 00:11:04.252 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=44690: Wed May 15 04:48:18 2024 00:11:04.252 write: IOPS=259k, BW=1011MiB/s (1060MB/s)(9.88GiB/10009msec); 0 zone resets 00:11:04.252 slat (nsec): min=778, max=42143k, avg=16947.19, stdev=258337.34 00:11:04.252 clat (usec): min=5, max=49249, avg=156.44, stdev=816.36 00:11:04.252 lat (usec): min=8, max=49272, avg=173.39, stdev=855.81 00:11:04.252 clat percentiles (usec): 00:11:04.252 | 50.000th=[ 93], 99.000th=[ 717], 99.900th=[13173], 99.990th=[22152], 00:11:04.252 | 99.999th=[40109] 00:11:04.252 bw ( KiB/s): min=681365, max=1566270, per=99.55%, avg=1030320.11, stdev=20670.38, samples=266 00:11:04.252 iops : min=170338, max=391568, avg=257575.42, stdev=5167.62, samples=266 00:11:04.252 trim: IOPS=259k, BW=1011MiB/s (1060MB/s)(9.88GiB/10009msec); 0 zone resets 00:11:04.252 slat (nsec): min=1452, max=48225k, avg=12173.30, stdev=216720.92 00:11:04.252 clat (nsec): min=1460, max=49272k, avg=143692.21, stdev=752513.37 00:11:04.252 lat (usec): min=4, max=49285, avg=155.87, stdev=783.04 00:11:04.252 clat percentiles (usec): 00:11:04.253 | 50.000th=[ 101], 99.000th=[ 241], 99.900th=[13173], 99.990th=[22152], 00:11:04.253 | 99.999th=[40109] 00:11:04.253 bw ( KiB/s): min=681365, max=1566332, per=99.55%, avg=1030322.53, stdev=20670.24, samples=266 00:11:04.253 iops : min=170338, max=391586, avg=257576.42, stdev=5167.63, samples=266 00:11:04.253 lat (usec) : 2=0.01%, 4=0.03%, 10=0.31%, 20=0.71%, 50=6.98% 00:11:04.253 lat (usec) : 100=45.61%, 250=44.27%, 500=0.69%, 750=1.00%, 1000=0.07% 00:11:04.253 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.29%, 50=0.02% 00:11:04.253 cpu : usr=71.63%, sys=0.00%, ctx=6250, majf=0, minf=846 00:11:04.253 IO depths : 1=12.2%, 2=24.5%, 4=50.1%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.253 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.253 issued rwts: total=0,2589790,2589793,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.253 latency : target=0, window=0, percentile=100.00%, depth=8 
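For reference, the trim job file exercised above is assembled by the @354-@356 loop in the trace: jq keeps only bdevs whose supported_io_types.unmap is true (raid1 and AIO0 report "unmap": false in the JSON dump and are skipped), each surviving name becomes a [job_<name>] fio section, and fio is then run with the ASAN runtime preloaded, as the @1324/@1331 lines show. A condensed sketch of those two steps, not the verbatim script; $fio_config, $plugin, $json_conf and $out_dir are illustrative placeholder names:

    # 1) One fio job section per unmap-capable bdev, as echoed at @355/@356.
    for b in $(printf '%s\n' "${bdevs[@]}" |
               jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"       # fio section header
        echo "filename=$b"    # the spdk_bdev ioengine resolves this to the bdev name
    done >> "$fio_config"

    # 2) Find the ASAN runtime the fio plugin links against, preload it,
    #    and run fio with the flags recorded in the trace.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$fio_config" --verify_state_save=0 \
        --spdk_json_conf="$json_conf" --aux-path="$out_dir"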
00:11:04.253 00:11:04.253 Run status group 0 (all jobs): 00:11:04.253 WRITE: bw=1011MiB/s (1060MB/s), 1011MiB/s-1011MiB/s (1060MB/s-1060MB/s), io=9.88GiB (10.6GB), run=10009-10009msec 00:11:04.253 TRIM: bw=1011MiB/s (1060MB/s), 1011MiB/s-1011MiB/s (1060MB/s-1060MB/s), io=9.88GiB (10.6GB), run=10009-10009msec 00:11:06.789 ----------------------------------------------------- 00:11:06.789 Suppressions used: 00:11:06.789 count bytes template 00:11:06.789 14 129 /usr/src/fio/parse.c 00:11:06.789 2 596 libcrypto.so 00:11:06.789 ----------------------------------------------------- 00:11:06.789 00:11:06.789 ************************************ 00:11:06.789 END TEST bdev_fio_trim 00:11:06.789 ************************************ 00:11:06.789 00:11:06.789 real 0m14.818s 00:11:06.789 user 1m49.633s 00:11:06.789 sys 0m0.591s 00:11:06.789 04:48:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.789 04:48:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.789 04:48:20 -- bdev/blockdev.sh@366 -- # rm -f 00:11:06.789 04:48:20 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:06.789 /home/vagrant/spdk_repo/spdk 00:11:06.789 04:48:20 -- bdev/blockdev.sh@368 -- # popd 00:11:06.789 04:48:20 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:11:06.789 00:11:06.789 real 0m30.582s 00:11:06.789 user 3m28.099s 00:11:06.789 sys 0m3.495s 00:11:06.789 04:48:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.789 04:48:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.789 ************************************ 00:11:06.789 END TEST bdev_fio 00:11:06.789 ************************************ 00:11:06.789 04:48:20 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:06.789 04:48:20 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:06.789 04:48:20 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:06.789 04:48:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.789 04:48:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.789 ************************************ 00:11:06.789 START TEST bdev_verify 00:11:06.789 ************************************ 00:11:06.789 04:48:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:06.789 [2024-05-15 04:48:20.854835] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:06.789 [2024-05-15 04:48:20.855005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44893 ] 00:11:06.789 [2024-05-15 04:48:21.010652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:07.047 [2024-05-15 04:48:21.256804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.047 [2024-05-15 04:48:21.256805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.614 [2024-05-15 04:48:21.823744] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:07.614 [2024-05-15 04:48:21.823826] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:07.614 [2024-05-15 04:48:21.831700] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:07.614 [2024-05-15 04:48:21.831953] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:07.614 [2024-05-15 04:48:21.839744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:07.614 [2024-05-15 04:48:21.839782] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:07.614 [2024-05-15 04:48:21.839813] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:07.873 [2024-05-15 04:48:22.069378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:07.873 [2024-05-15 04:48:22.069471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.873 [2024-05-15 04:48:22.069541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:11:07.873 [2024-05-15 04:48:22.069565] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.873 [2024-05-15 04:48:22.071629] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.873 [2024-05-15 04:48:22.071673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:08.467 Running I/O for 5 seconds... 
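The five-second verify pass that follows is driven by the bdevperf invocation recorded at @1104. A sketch of the same call with each flag glossed; the glosses are my reading of bdevperf's usage text, and SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk        # path taken from the log
    args=(
        --json "$SPDK_DIR/test/bdev/bdev.json"   # bdev configuration to load
        -q 128       # queue depth per job
        -o 4096      # I/O size in bytes
        -w verify    # write a pattern, read it back, compare
        -t 5         # run time in seconds
        -C           # submit from every core in the mask to every bdev,
                     # hence the paired Core Mask 0x1/0x2 rows below
        -m 0x3       # core mask 0b11: the two reactors started above
    )
    "$SPDK_DIR/build/examples/bdevperf" "${args[@]}"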
00:11:13.736 00:11:13.736 Latency(us) 00:11:13.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.736 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x1000 00:11:13.736 Malloc0 : 5.07 3901.05 15.24 0.00 0.00 32701.71 881.62 64911.85 00:11:13.736 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x1000 length 0x1000 00:11:13.736 Malloc0 : 5.07 3625.82 14.16 0.00 0.00 35119.46 827.00 92873.87 00:11:13.736 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x800 00:11:13.736 Malloc1p0 : 5.07 2632.90 10.28 0.00 0.00 48434.27 1895.86 59419.31 00:11:13.736 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x800 length 0x800 00:11:13.736 Malloc1p0 : 5.07 2467.75 9.64 0.00 0.00 51594.48 1864.66 57172.36 00:11:13.736 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x800 00:11:13.736 Malloc1p1 : 5.07 2632.67 10.28 0.00 0.00 48404.10 1724.22 57671.68 00:11:13.736 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x800 length 0x800 00:11:13.736 Malloc1p1 : 5.07 2467.49 9.64 0.00 0.00 51558.43 1708.62 55424.73 00:11:13.736 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p0 : 5.07 2632.47 10.28 0.00 0.00 48378.11 1724.22 55924.05 00:11:13.736 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p0 : 5.07 2467.24 9.64 0.00 0.00 51534.23 1778.83 53677.10 00:11:13.736 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p1 : 5.08 2632.26 10.28 0.00 0.00 48347.26 1802.24 53926.77 00:11:13.736 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p1 : 5.08 2478.48 9.68 0.00 0.00 51381.43 1872.46 51679.82 00:11:13.736 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p2 : 5.08 2632.03 10.28 0.00 0.00 48316.43 1747.63 52179.14 00:11:13.736 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p2 : 5.09 2478.30 9.68 0.00 0.00 51350.78 1763.23 49932.19 00:11:13.736 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p3 : 5.08 2631.85 10.28 0.00 0.00 48288.54 1708.62 50681.17 00:11:13.736 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p3 : 5.09 2478.15 9.68 0.00 0.00 51321.38 1693.01 48434.22 00:11:13.736 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p4 : 5.08 2631.66 10.28 0.00 0.00 48259.72 1739.82 
48933.55 00:11:13.736 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p4 : 5.09 2478.01 9.68 0.00 0.00 51290.37 1739.82 46686.60 00:11:13.736 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p5 : 5.08 2631.48 10.28 0.00 0.00 48228.82 1778.83 47185.92 00:11:13.736 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p5 : 5.09 2477.86 9.68 0.00 0.00 51254.75 1778.83 44938.97 00:11:13.736 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p6 : 5.08 2631.29 10.28 0.00 0.00 48198.99 1708.62 45438.29 00:11:13.736 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p6 : 5.09 2477.73 9.68 0.00 0.00 51227.82 1693.01 43191.34 00:11:13.736 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x200 00:11:13.736 Malloc2p7 : 5.08 2631.11 10.28 0.00 0.00 48173.30 1732.02 43690.67 00:11:13.736 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x200 length 0x200 00:11:13.736 Malloc2p7 : 5.09 2477.59 9.68 0.00 0.00 51204.12 1685.21 41443.72 00:11:13.736 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x1000 00:11:13.736 TestPT : 5.08 2617.93 10.23 0.00 0.00 48379.74 4306.65 43940.33 00:11:13.736 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x1000 length 0x1000 00:11:13.736 TestPT : 5.09 2448.94 9.57 0.00 0.00 51770.25 4369.07 72901.00 00:11:13.736 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x2000 00:11:13.736 raid0 : 5.08 2645.70 10.33 0.00 0.00 47880.39 1833.45 37948.46 00:11:13.736 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x2000 length 0x2000 00:11:13.736 raid0 : 5.09 2477.30 9.68 0.00 0.00 51113.25 1802.24 39196.77 00:11:13.736 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x2000 00:11:13.736 concat0 : 5.08 2645.47 10.33 0.00 0.00 47850.79 1841.25 36700.16 00:11:13.736 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x2000 length 0x2000 00:11:13.736 concat0 : 5.09 2477.16 9.68 0.00 0.00 51082.93 1888.06 39446.43 00:11:13.736 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 0x1000 00:11:13.736 raid1 : 5.08 2645.24 10.33 0.00 0.00 47816.48 2075.31 36949.82 00:11:13.736 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x1000 length 0x1000 00:11:13.736 raid1 : 5.09 2477.02 9.68 0.00 0.00 51048.58 2122.12 39696.09 00:11:13.736 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x0 length 
0x4e2 00:11:13.736 AIO0 : 5.08 2633.91 10.29 0.00 0.00 47976.11 1583.79 38947.11 00:11:13.736 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.736 Verification LBA range: start 0x4e2 length 0x4e2 00:11:13.736 AIO0 : 5.09 2462.57 9.62 0.00 0.00 51299.12 2309.36 39696.09 00:11:13.736 =================================================================================================================== 00:11:13.736 Total : 84126.43 328.62 0.00 0.00 48300.30 827.00 92873.87 00:11:16.266 ************************************ 00:11:16.266 END TEST bdev_verify 00:11:16.266 ************************************ 00:11:16.266 00:11:16.266 real 0m9.782s 00:11:16.266 user 0m17.371s 00:11:16.266 sys 0m0.919s 00:11:16.266 04:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.266 04:48:30 -- common/autotest_common.sh@10 -- # set +x 00:11:16.525 04:48:30 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:16.525 04:48:30 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:16.525 04:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.525 04:48:30 -- common/autotest_common.sh@10 -- # set +x 00:11:16.525 ************************************ 00:11:16.525 START TEST bdev_verify_big_io 00:11:16.525 ************************************ 00:11:16.525 04:48:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:16.525 [2024-05-15 04:48:30.692108] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:16.525 [2024-05-15 04:48:30.692264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45034 ] 00:11:16.784 [2024-05-15 04:48:30.847598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.042 [2024-05-15 04:48:31.090357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.042 [2024-05-15 04:48:31.090363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.609 [2024-05-15 04:48:31.677255] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:17.609 [2024-05-15 04:48:31.677348] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:17.609 [2024-05-15 04:48:31.685236] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:17.609 [2024-05-15 04:48:31.685318] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:17.609 [2024-05-15 04:48:31.693264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:17.609 [2024-05-15 04:48:31.693305] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:17.609 [2024-05-15 04:48:31.693351] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:17.867 [2024-05-15 04:48:31.934025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:17.867 [2024-05-15 04:48:31.934117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.867 [2024-05-15 04:48:31.934173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:11:17.867 [2024-05-15 04:48:31.934197] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.867 [2024-05-15 04:48:31.936228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.867 [2024-05-15 04:48:31.936263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:18.433 [2024-05-15 04:48:32.366843] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.371191] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.376357] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.381465] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.385844] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.391125] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.395176] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.400435] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.404884] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.409956] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.414346] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.419417] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:18.433 [2024-05-15 04:48:32.424866] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:18.434 [2024-05-15 04:48:32.429246] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:18.434 [2024-05-15 04:48:32.434526] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:18.434 [2024-05-15 04:48:32.438779] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:11:18.434 [2024-05-15 04:48:32.549942] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:18.434 [2024-05-15 04:48:32.558755] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:18.434 Running I/O for 5 seconds... 00:11:25.051 00:11:25.051 Latency(us) 00:11:25.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.051 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x100 00:11:25.051 Malloc0 : 5.30 859.81 53.74 0.00 0.00 146351.05 10236.10 451387.00 00:11:25.051 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x100 length 0x100 00:11:25.051 Malloc0 : 5.30 812.19 50.76 0.00 0.00 154873.83 9861.61 535273.08 00:11:25.051 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x80 00:11:25.051 Malloc1p0 : 5.37 458.02 28.63 0.00 0.00 271587.22 21221.18 547256.81 00:11:25.051 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x80 length 0x80 00:11:25.051 Malloc1p0 : 5.30 570.53 35.66 0.00 0.00 219403.62 20597.03 481346.32 00:11:25.051 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x80 00:11:25.051 Malloc1p1 : 5.43 243.64 15.23 0.00 0.00 506102.10 21970.16 958698.06 00:11:25.051 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x80 length 0x80 00:11:25.051 Malloc1p1 : 5.41 237.87 14.87 0.00 0.00 518874.25 20721.86 998643.81 00:11:25.051 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x20 00:11:25.051 Malloc2p0 : 5.34 153.08 9.57 0.00 0.00 202557.36 3370.42 357514.48 00:11:25.051 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x20 length 0x20 00:11:25.051 Malloc2p0 : 5.33 149.45 9.34 0.00 0.00 207639.44 3994.58 319566.02 00:11:25.051 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x20 00:11:25.051 Malloc2p1 : 5.34 153.06 9.57 0.00 0.00 202148.89 3807.33 349525.33 00:11:25.051 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x20 length 0x20 00:11:25.051 Malloc2p1 : 5.33 149.43 9.34 0.00 0.00 207288.72 4119.41 313574.16 00:11:25.051 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x20 00:11:25.051 Malloc2p2 : 5.34 153.05 9.57 0.00 0.00 201742.01 3464.05 343533.47 00:11:25.051 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x20 length 0x20 00:11:25.051 Malloc2p2 : 5.33 149.41 9.34 0.00 0.00 206839.05 4244.24 305585.01 00:11:25.051 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x20 00:11:25.051 Malloc2p3 : 5.34 153.03 9.56 0.00 0.00 201342.93 4493.90 337541.61 00:11:25.051 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x20 length 0x20 00:11:25.051 Malloc2p3 : 5.33 149.39 9.34 0.00 0.00 206473.75 3464.05 301590.43 00:11:25.051 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x20 00:11:25.051 Malloc2p4 : 5.34 153.02 9.56 0.00 0.00 200935.32 4181.82 331549.74 00:11:25.051 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x20 length 0x20 00:11:25.051 Malloc2p4 : 5.34 149.38 9.34 0.00 0.00 206070.09 3276.80 293601.28 00:11:25.051 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x0 length 0x20 00:11:25.051 Malloc2p5 : 5.34 153.00 9.56 0.00 0.00 200526.96 3776.12 323560.59 00:11:25.051 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.051 Verification LBA range: start 0x20 length 0x20 00:11:25.051 Malloc2p5 : 5.34 149.36 9.33 0.00 0.00 205699.89 3822.93 287609.42 00:11:25.052 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x20 00:11:25.052 Malloc2p6 : 5.34 152.99 9.56 0.00 0.00 200157.34 4056.99 317568.73 00:11:25.052 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x20 length 0x20 00:11:25.052 Malloc2p6 : 5.34 149.34 9.33 0.00 0.00 205247.07 4119.41 279620.27 00:11:25.052 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x20 00:11:25.052 Malloc2p7 : 5.34 152.98 9.56 0.00 0.00 199701.14 4431.48 309579.58 00:11:25.052 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x20 length 0x20 00:11:25.052 Malloc2p7 : 5.34 149.33 9.33 0.00 0.00 204887.59 3822.93 273628.40 00:11:25.052 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x100 00:11:25.052 TestPT : 5.45 248.79 15.55 0.00 0.00 484012.74 20846.69 950708.91 00:11:25.052 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x100 length 0x100 00:11:25.052 TestPT : 5.42 231.75 14.48 0.00 0.00 522194.98 26838.55 1006632.96 00:11:25.052 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x200 00:11:25.052 raid0 : 5.45 255.55 15.97 0.00 0.00 468938.01 22094.99 954703.48 00:11:25.052 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x200 length 0x200 00:11:25.052 raid0 : 5.42 243.89 15.24 0.00 0.00 493940.14 20721.86 986660.08 00:11:25.052 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x200 00:11:25.052 concat0 : 5.43 263.18 16.45 0.00 0.00 452778.82 21221.18 962692.63 00:11:25.052 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x200 length 0x200 00:11:25.052 concat0 : 5.43 
250.00 15.62 0.00 0.00 478802.06 22219.82 990654.66 00:11:25.052 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x100 00:11:25.052 raid1 : 5.44 276.04 17.25 0.00 0.00 428983.50 12046.14 966687.21 00:11:25.052 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x100 length 0x100 00:11:25.052 raid1 : 5.44 263.09 16.44 0.00 0.00 452285.02 11858.90 998643.81 00:11:25.052 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x0 length 0x4e 00:11:25.052 AIO0 : 5.45 286.37 17.90 0.00 0.00 250334.72 1131.28 559240.53 00:11:25.052 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:11:25.052 Verification LBA range: start 0x4e length 0x4e 00:11:25.052 AIO0 : 5.43 262.01 16.38 0.00 0.00 275155.58 5742.20 571224.26 00:11:25.052 =================================================================================================================== 00:11:25.052 Total : 8182.03 511.38 0.00 0.00 287548.40 1131.28 1006632.96 00:11:27.589 ************************************ 00:11:27.589 END TEST bdev_verify_big_io 00:11:27.589 ************************************ 00:11:27.589 00:11:27.589 real 0m10.883s 00:11:27.589 user 0m19.777s 00:11:27.589 sys 0m0.722s 00:11:27.589 04:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.589 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:11:27.589 04:48:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:27.589 04:48:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:27.589 04:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.589 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:11:27.589 ************************************ 00:11:27.589 START TEST bdev_write_zeroes 00:11:27.589 ************************************ 00:11:27.589 04:48:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:27.589 [2024-05-15 04:48:41.631523] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:27.589 [2024-05-15 04:48:41.631697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45186 ] 00:11:27.589 [2024-05-15 04:48:41.809967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.849 [2024-05-15 04:48:42.064453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.418 [2024-05-15 04:48:42.630813] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:28.418 [2024-05-15 04:48:42.630900] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:28.418 [2024-05-15 04:48:42.638768] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:28.418 [2024-05-15 04:48:42.638827] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:28.418 [2024-05-15 04:48:42.646806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:28.418 [2024-05-15 04:48:42.646851] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:28.418 [2024-05-15 04:48:42.646879] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:28.678 [2024-05-15 04:48:42.876655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:28.678 [2024-05-15 04:48:42.877055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.678 [2024-05-15 04:48:42.877114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:11:28.678 [2024-05-15 04:48:42.877144] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.678 [2024-05-15 04:48:42.878909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.678 [2024-05-15 04:48:42.878950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:29.247 Running I/O for 1 seconds... 
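The TestPT rows in the table below exercise the passthru vbdev that the notices above just registered on top of Malloc3. Outside this harness the same stack could plausibly be built with one RPC; a minimal sketch, assuming the standard passthru module (bdev names taken from the log, RPC shape assumed rather than shown here):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT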
00:11:30.185 00:11:30.185 Latency(us) 00:11:30.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.185 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc0 : 1.01 16726.76 65.34 0.00 0.00 7649.82 236.98 15478.98 00:11:30.185 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc1p0 : 1.01 16719.74 65.31 0.00 0.00 7647.32 331.58 14917.24 00:11:30.185 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc1p1 : 1.01 16715.61 65.30 0.00 0.00 7643.19 335.48 14542.75 00:11:30.185 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p0 : 1.01 16712.07 65.28 0.00 0.00 7637.95 353.04 14230.67 00:11:30.185 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p1 : 1.01 16708.56 65.27 0.00 0.00 7634.18 331.58 13918.60 00:11:30.185 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p2 : 1.01 16705.08 65.25 0.00 0.00 7629.66 321.83 13606.52 00:11:30.185 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p3 : 1.01 16701.23 65.24 0.00 0.00 7625.66 319.88 13356.86 00:11:30.185 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p4 : 1.02 16741.22 65.40 0.00 0.00 7602.14 323.78 12982.37 00:11:30.185 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p5 : 1.02 16737.27 65.38 0.00 0.00 7599.06 358.89 12670.29 00:11:30.185 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p6 : 1.02 16733.51 65.37 0.00 0.00 7594.36 327.68 12295.80 00:11:30.185 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 Malloc2p7 : 1.02 16729.89 65.35 0.00 0.00 7589.30 314.03 11983.73 00:11:30.185 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 TestPT : 1.02 16726.27 65.34 0.00 0.00 7585.68 327.68 11671.65 00:11:30.185 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 raid0 : 1.02 16721.75 65.32 0.00 0.00 7580.20 550.03 11109.91 00:11:30.185 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 concat0 : 1.02 16717.57 65.30 0.00 0.00 7572.85 534.43 10548.18 00:11:30.185 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 raid1 : 1.02 16711.02 65.28 0.00 0.00 7566.79 862.11 9736.78 00:11:30.185 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:30.185 AIO0 : 1.02 16688.63 65.19 0.00 0.00 7561.45 963.54 8925.38 00:11:30.185 =================================================================================================================== 00:11:30.185 Total : 267496.18 1044.91 0.00 0.00 7607.38 236.98 15478.98 00:11:33.476 ************************************ 00:11:33.476 END TEST bdev_write_zeroes 00:11:33.476 ************************************ 00:11:33.476 00:11:33.476 real 0m5.631s 00:11:33.476 user 0m4.782s 00:11:33.476 sys 0m0.635s 00:11:33.476 04:48:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.476 04:48:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.476 04:48:47 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:33.476 04:48:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:33.476 04:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:33.476 04:48:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.476 ************************************ 00:11:33.476 START TEST bdev_json_nonenclosed 00:11:33.476 ************************************ 00:11:33.476 04:48:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:33.476 [2024-05-15 04:48:47.321014] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:33.476 [2024-05-15 04:48:47.321186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45279 ] 00:11:33.476 [2024-05-15 04:48:47.485069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.736 [2024-05-15 04:48:47.742147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.736 [2024-05-15 04:48:47.742359] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:33.736 [2024-05-15 04:48:47.742396] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.996 ************************************ 00:11:33.996 END TEST bdev_json_nonenclosed 00:11:33.996 ************************************ 00:11:33.996 00:11:33.996 real 0m1.033s 00:11:33.996 user 0m0.696s 00:11:33.996 sys 0m0.140s 00:11:33.996 04:48:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.996 04:48:48 -- common/autotest_common.sh@10 -- # set +x 00:11:34.257 04:48:48 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:34.257 04:48:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:34.257 04:48:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.257 04:48:48 -- common/autotest_common.sh@10 -- # set +x 00:11:34.257 ************************************ 00:11:34.257 START TEST bdev_json_nonarray 00:11:34.257 ************************************ 00:11:34.257 04:48:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:34.257 [2024-05-15 04:48:48.407073] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
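Both bdev_json_* cases here are negative tests: bdevperf is handed a deliberately malformed --json config, and the case passes only when spdk_subsystem_init_from_json_config rejects it and spdk_app_stop reports non-zero. For contrast, a well-formed config is a single JSON object whose "subsystems" member is an array; the broken fixtures themselves are not printed in this log, so the skeleton below is only the assumed shape they deviate from:

    {
      "subsystems": []
    }

nonenclosed.json presumably drops the outer braces ("not enclosed in {}"), and nonarray.json presumably makes "subsystems" something other than an array.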
00:11:34.257 [2024-05-15 04:48:48.407229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45317 ] 00:11:34.517 [2024-05-15 04:48:48.558519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.776 [2024-05-15 04:48:48.819051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.776 [2024-05-15 04:48:48.819266] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:11:34.776 [2024-05-15 04:48:48.819304] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:35.344 ************************************ 00:11:35.344 END TEST bdev_json_nonarray 00:11:35.344 ************************************ 00:11:35.344 00:11:35.344 real 0m1.032s 00:11:35.344 user 0m0.704s 00:11:35.344 sys 0m0.132s 00:11:35.344 04:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.344 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 04:48:49 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:11:35.344 04:48:49 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:11:35.344 04:48:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:35.344 04:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.344 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 ************************************ 00:11:35.344 START TEST bdev_qos 00:11:35.344 ************************************ 00:11:35.344 04:48:49 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:11:35.344 Process qos testing pid: 45355 00:11:35.344 04:48:49 -- bdev/blockdev.sh@444 -- # QOS_PID=45355 00:11:35.344 04:48:49 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 45355' 00:11:35.344 04:48:49 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:11:35.344 04:48:49 -- bdev/blockdev.sh@447 -- # waitforlisten 45355 00:11:35.344 04:48:49 -- common/autotest_common.sh@819 -- # '[' -z 45355 ']' 00:11:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.344 04:48:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.344 04:48:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:35.344 04:48:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.344 04:48:49 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:11:35.344 04:48:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:35.344 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 [2024-05-15 04:48:49.495045] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
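Unlike the earlier one-shot runs, the QoS suite starts bdevperf with -z, so the app comes up idle and listens on /var/tmp/spdk.sock; Malloc_0 and Null_1 are then created over RPC, and traffic is only kicked off once the limits are in place. The two commands doing that, verbatim from the xtrace below:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests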
00:11:35.345 [2024-05-15 04:48:49.495226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45355 ] 00:11:35.603 [2024-05-15 04:48:49.652269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.862 [2024-05-15 04:48:49.959230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.799 04:48:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:36.799 04:48:51 -- common/autotest_common.sh@852 -- # return 0 00:11:36.799 04:48:51 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:11:36.799 04:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.799 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 Malloc_0 00:11:37.058 04:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.058 04:48:51 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:11:37.058 04:48:51 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:11:37.058 04:48:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:37.058 04:48:51 -- common/autotest_common.sh@889 -- # local i 00:11:37.058 04:48:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:37.058 04:48:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:37.058 04:48:51 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:37.058 04:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.058 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 04:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.058 04:48:51 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:11:37.058 04:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.058 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 [ 00:11:37.058 { 00:11:37.058 "name": "Malloc_0", 00:11:37.058 "aliases": [ 00:11:37.058 "44f840d6-7cd6-4722-8dc3-a011421e3c8c" 00:11:37.058 ], 00:11:37.058 "product_name": "Malloc disk", 00:11:37.058 "block_size": 512, 00:11:37.058 "num_blocks": 262144, 00:11:37.058 "uuid": "44f840d6-7cd6-4722-8dc3-a011421e3c8c", 00:11:37.058 "assigned_rate_limits": { 00:11:37.058 "rw_ios_per_sec": 0, 00:11:37.058 "rw_mbytes_per_sec": 0, 00:11:37.058 "r_mbytes_per_sec": 0, 00:11:37.058 "w_mbytes_per_sec": 0 00:11:37.058 }, 00:11:37.058 "claimed": false, 00:11:37.058 "zoned": false, 00:11:37.058 "supported_io_types": { 00:11:37.058 "read": true, 00:11:37.058 "write": true, 00:11:37.058 "unmap": true, 00:11:37.058 "write_zeroes": true, 00:11:37.058 "flush": true, 00:11:37.058 "reset": true, 00:11:37.058 "compare": false, 00:11:37.058 "compare_and_write": false, 00:11:37.058 "abort": true, 00:11:37.058 "nvme_admin": false, 00:11:37.058 "nvme_io": false 00:11:37.058 }, 00:11:37.058 "memory_domains": [ 00:11:37.058 { 00:11:37.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.058 "dma_device_type": 2 00:11:37.058 } 00:11:37.058 ], 00:11:37.058 "driver_specific": {} 00:11:37.058 } 00:11:37.058 ] 00:11:37.058 04:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.058 04:48:51 -- common/autotest_common.sh@895 -- # return 0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:11:37.058 04:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.058 04:48:51 -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.058 Null_1 00:11:37.058 04:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.058 04:48:51 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:11:37.058 04:48:51 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:11:37.058 04:48:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:37.058 04:48:51 -- common/autotest_common.sh@889 -- # local i 00:11:37.058 04:48:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:37.058 04:48:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:37.058 04:48:51 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:37.058 04:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.058 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 04:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.058 04:48:51 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:11:37.058 04:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:37.058 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 [ 00:11:37.058 { 00:11:37.058 "name": "Null_1", 00:11:37.058 "aliases": [ 00:11:37.058 "e5b54d6b-f6af-462a-890c-8ed8d56e164c" 00:11:37.058 ], 00:11:37.058 "product_name": "Null disk", 00:11:37.058 "block_size": 512, 00:11:37.058 "num_blocks": 262144, 00:11:37.058 "uuid": "e5b54d6b-f6af-462a-890c-8ed8d56e164c", 00:11:37.058 "assigned_rate_limits": { 00:11:37.058 "rw_ios_per_sec": 0, 00:11:37.058 "rw_mbytes_per_sec": 0, 00:11:37.058 "r_mbytes_per_sec": 0, 00:11:37.058 "w_mbytes_per_sec": 0 00:11:37.058 }, 00:11:37.058 "claimed": false, 00:11:37.058 "zoned": false, 00:11:37.058 "supported_io_types": { 00:11:37.058 "read": true, 00:11:37.058 "write": true, 00:11:37.058 "unmap": false, 00:11:37.058 "write_zeroes": true, 00:11:37.058 "flush": false, 00:11:37.058 "reset": true, 00:11:37.058 "compare": false, 00:11:37.058 "compare_and_write": false, 00:11:37.058 "abort": true, 00:11:37.058 "nvme_admin": false, 00:11:37.058 "nvme_io": false 00:11:37.058 }, 00:11:37.058 "driver_specific": {} 00:11:37.058 } 00:11:37.058 ] 00:11:37.058 04:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:37.058 04:48:51 -- common/autotest_common.sh@895 -- # return 0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@455 -- # qos_function_test 00:11:37.058 04:48:51 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:11:37.058 04:48:51 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:11:37.058 04:48:51 -- bdev/blockdev.sh@410 -- # local io_result=0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.058 04:48:51 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:11:37.058 04:48:51 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@375 -- # local iostat_result 00:11:37.058 04:48:51 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:37.058 04:48:51 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:11:37.058 04:48:51 -- bdev/blockdev.sh@376 -- # tail -1 00:11:37.317 Running I/O for 60 seconds... 
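Each iostat_result that follows is one live sample of the running bdevperf process; get_io_result simply greps the device row and picks a column with awk, as the xtrace shows. The Malloc_0 IOPS probe, reproduced by hand with the pipeline verbatim from this log:

    /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}'
    # -> 231263.25; the harness truncates this to 231263 and derives the 57000 IOPS cap from it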
00:11:42.584 04:48:56 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 231263.25 925053.01 0.00 0.00 933888.00 0.00 0.00 ' 00:11:42.584 04:48:56 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:11:42.584 04:48:56 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:11:42.584 04:48:56 -- bdev/blockdev.sh@378 -- # iostat_result=231263.25 00:11:42.584 04:48:56 -- bdev/blockdev.sh@383 -- # echo 231263 00:11:42.584 04:48:56 -- bdev/blockdev.sh@414 -- # io_result=231263 00:11:42.584 04:48:56 -- bdev/blockdev.sh@416 -- # iops_limit=57000 00:11:42.584 04:48:56 -- bdev/blockdev.sh@417 -- # '[' 57000 -gt 1000 ']' 00:11:42.584 04:48:56 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 57000 Malloc_0 00:11:42.584 04:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.584 04:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.584 04:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.584 04:48:56 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 57000 IOPS Malloc_0 00:11:42.584 04:48:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:42.584 04:48:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.584 04:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.584 ************************************ 00:11:42.584 START TEST bdev_qos_iops 00:11:42.584 ************************************ 00:11:42.584 04:48:56 -- common/autotest_common.sh@1104 -- # run_qos_test 57000 IOPS Malloc_0 00:11:42.584 04:48:56 -- bdev/blockdev.sh@387 -- # local qos_limit=57000 00:11:42.584 04:48:56 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:11:42.584 04:48:56 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:11:42.584 04:48:56 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:11:42.584 04:48:56 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:11:42.584 04:48:56 -- bdev/blockdev.sh@375 -- # local iostat_result 00:11:42.584 04:48:56 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:42.584 04:48:56 -- bdev/blockdev.sh@376 -- # tail -1 00:11:42.584 04:48:56 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:11:47.864 04:49:01 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 56980.83 227923.32 0.00 0.00 229824.00 0.00 0.00 ' 00:11:47.864 04:49:01 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:11:47.864 04:49:01 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:11:47.864 04:49:01 -- bdev/blockdev.sh@378 -- # iostat_result=56980.83 00:11:47.864 04:49:01 -- bdev/blockdev.sh@383 -- # echo 56980 00:11:47.864 ************************************ 00:11:47.864 END TEST bdev_qos_iops 00:11:47.864 ************************************ 00:11:47.864 04:49:01 -- bdev/blockdev.sh@390 -- # qos_result=56980 00:11:47.864 04:49:01 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:11:47.864 04:49:01 -- bdev/blockdev.sh@394 -- # lower_limit=51300 00:11:47.864 04:49:01 -- bdev/blockdev.sh@395 -- # upper_limit=62700 00:11:47.864 04:49:01 -- bdev/blockdev.sh@398 -- # '[' 56980 -lt 51300 ']' 00:11:47.864 04:49:01 -- bdev/blockdev.sh@398 -- # '[' 56980 -gt 62700 ']' 00:11:47.864 00:11:47.864 real 0m5.189s 00:11:47.864 user 0m0.110s 00:11:47.864 sys 0m0.034s 00:11:47.864 04:49:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.864 04:49:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 04:49:01 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:11:47.864 04:49:01 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:11:47.864 
04:49:01 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:11:47.864 04:49:01 -- bdev/blockdev.sh@375 -- # local iostat_result 00:11:47.864 04:49:01 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:47.864 04:49:01 -- bdev/blockdev.sh@376 -- # grep Null_1 00:11:47.864 04:49:01 -- bdev/blockdev.sh@376 -- # tail -1 00:11:53.152 04:49:06 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 63623.17 254492.66 0.00 0.00 258048.00 0.00 0.00 ' 00:11:53.152 04:49:06 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:11:53.152 04:49:06 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:53.152 04:49:06 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:11:53.152 04:49:06 -- bdev/blockdev.sh@380 -- # iostat_result=258048.00 00:11:53.152 04:49:06 -- bdev/blockdev.sh@383 -- # echo 258048 00:11:53.152 04:49:06 -- bdev/blockdev.sh@425 -- # bw_limit=258048 00:11:53.152 04:49:06 -- bdev/blockdev.sh@426 -- # bw_limit=25 00:11:53.152 04:49:06 -- bdev/blockdev.sh@427 -- # '[' 25 -lt 2 ']' 00:11:53.152 04:49:06 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 25 Null_1 00:11:53.152 04:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.152 04:49:06 -- common/autotest_common.sh@10 -- # set +x 00:11:53.152 04:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.152 04:49:06 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 25 BANDWIDTH Null_1 00:11:53.152 04:49:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:53.152 04:49:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.152 04:49:06 -- common/autotest_common.sh@10 -- # set +x 00:11:53.152 ************************************ 00:11:53.152 START TEST bdev_qos_bw 00:11:53.152 ************************************ 00:11:53.152 04:49:06 -- common/autotest_common.sh@1104 -- # run_qos_test 25 BANDWIDTH Null_1 00:11:53.152 04:49:06 -- bdev/blockdev.sh@387 -- # local qos_limit=25 00:11:53.152 04:49:06 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:11:53.152 04:49:06 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:11:53.152 04:49:06 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:11:53.152 04:49:06 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:11:53.152 04:49:06 -- bdev/blockdev.sh@375 -- # local iostat_result 00:11:53.152 04:49:06 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:53.152 04:49:06 -- bdev/blockdev.sh@376 -- # grep Null_1 00:11:53.152 04:49:06 -- bdev/blockdev.sh@376 -- # tail -1 00:11:58.423 04:49:12 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 6397.30 25589.22 0.00 0.00 25804.00 0.00 0.00 ' 00:11:58.423 04:49:12 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:11:58.423 04:49:12 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:58.423 04:49:12 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:11:58.423 04:49:12 -- bdev/blockdev.sh@380 -- # iostat_result=25804.00 00:11:58.423 04:49:12 -- bdev/blockdev.sh@383 -- # echo 25804 00:11:58.423 ************************************ 00:11:58.423 END TEST bdev_qos_bw 00:11:58.423 ************************************ 00:11:58.423 04:49:12 -- bdev/blockdev.sh@390 -- # qos_result=25804 00:11:58.423 04:49:12 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:58.423 04:49:12 -- bdev/blockdev.sh@392 -- # qos_limit=25600 00:11:58.423 04:49:12 -- bdev/blockdev.sh@394 -- # lower_limit=23040 00:11:58.423 04:49:12 -- bdev/blockdev.sh@395 -- # 
upper_limit=28160 00:11:58.423 04:49:12 -- bdev/blockdev.sh@398 -- # '[' 25804 -lt 23040 ']' 00:11:58.423 04:49:12 -- bdev/blockdev.sh@398 -- # '[' 25804 -gt 28160 ']' 00:11:58.423 00:11:58.423 real 0m5.155s 00:11:58.423 user 0m0.089s 00:11:58.423 sys 0m0.026s 00:11:58.423 04:49:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.423 04:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.423 04:49:12 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:11:58.423 04:49:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.423 04:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.423 04:49:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.423 04:49:12 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:11:58.423 04:49:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:58.423 04:49:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.423 04:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.423 ************************************ 00:11:58.423 START TEST bdev_qos_ro_bw 00:11:58.423 ************************************ 00:11:58.423 04:49:12 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:11:58.423 04:49:12 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:11:58.423 04:49:12 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:11:58.423 04:49:12 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:11:58.423 04:49:12 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:11:58.423 04:49:12 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:11:58.423 04:49:12 -- bdev/blockdev.sh@375 -- # local iostat_result 00:11:58.423 04:49:12 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:58.423 04:49:12 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:11:58.423 04:49:12 -- bdev/blockdev.sh@376 -- # tail -1 00:12:03.696 04:49:17 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.39 2049.54 0.00 0.00 2068.00 0.00 0.00 ' 00:12:03.696 04:49:17 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:03.696 04:49:17 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:03.696 04:49:17 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:03.696 04:49:17 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:12:03.696 04:49:17 -- bdev/blockdev.sh@383 -- # echo 2068 00:12:03.696 ************************************ 00:12:03.696 END TEST bdev_qos_ro_bw 00:12:03.696 ************************************ 00:12:03.696 04:49:17 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:12:03.696 04:49:17 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:03.696 04:49:17 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:12:03.696 04:49:17 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:12:03.696 04:49:17 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:12:03.696 04:49:17 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:12:03.696 04:49:17 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:12:03.696 00:12:03.696 real 0m5.179s 00:12:03.696 user 0m0.111s 00:12:03.696 sys 0m0.034s 00:12:03.696 04:49:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.696 04:49:17 -- common/autotest_common.sh@10 -- # set +x 00:12:03.696 04:49:17 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:03.696 04:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.696 04:49:17 -- common/autotest_common.sh@10 -- # set +x 
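All three run_qos_test passes above end in the same window check: the re-measured rate must land within ±10% of the configured cap (51300..62700 around the 57000 IOPS limit, 23040..28160 around the 25 MiB/s limit, 1843..2252 around the 2 MiB/s read-only cap). Condensed from the xtrace at bdev/blockdev.sh@394-398, the assertion is roughly:

    qos_limit=57000                      # or 25600 / 2048 for the two bandwidth cases
    lower_limit=$((qos_limit * 9 / 10))
    upper_limit=$((qos_limit * 11 / 10))
    [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ]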
00:12:03.955 04:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.955 04:49:18 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:12:03.955 04:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.955 04:49:18 -- common/autotest_common.sh@10 -- # set +x 00:12:03.955 00:12:03.955 Latency(us) 00:12:03.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.955 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:03.955 Malloc_0 : 26.49 78011.37 304.73 0.00 0.00 3250.42 1022.05 507311.06 00:12:03.955 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:03.955 Null_1 : 26.68 71381.69 278.83 0.00 0.00 3583.68 236.01 185747.75 00:12:03.955 =================================================================================================================== 00:12:03.955 Total : 149393.07 583.57 0.00 0.00 3410.24 236.01 507311.06 00:12:03.955 0 00:12:03.955 04:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.955 04:49:18 -- bdev/blockdev.sh@459 -- # killprocess 45355 00:12:03.955 04:49:18 -- common/autotest_common.sh@926 -- # '[' -z 45355 ']' 00:12:03.955 04:49:18 -- common/autotest_common.sh@930 -- # kill -0 45355 00:12:03.955 04:49:18 -- common/autotest_common.sh@931 -- # uname 00:12:03.955 04:49:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:03.955 04:49:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45355 00:12:03.955 04:49:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:03.955 04:49:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:03.955 killing process with pid 45355 00:12:03.955 Received shutdown signal, test time was about 26.717831 seconds 00:12:03.955 00:12:03.955 Latency(us) 00:12:03.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.955 =================================================================================================================== 00:12:03.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.955 04:49:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45355' 00:12:03.955 04:49:18 -- common/autotest_common.sh@945 -- # kill 45355 00:12:03.955 04:49:18 -- common/autotest_common.sh@950 -- # wait 45355 00:12:05.863 ************************************ 00:12:05.863 END TEST bdev_qos 00:12:05.863 ************************************ 00:12:05.863 04:49:19 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:12:05.863 00:12:05.863 real 0m30.582s 00:12:05.863 user 0m31.228s 00:12:05.863 sys 0m0.797s 00:12:05.863 04:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.863 04:49:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.863 04:49:19 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:05.863 04:49:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:05.863 04:49:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.863 04:49:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.863 ************************************ 00:12:05.863 START TEST bdev_qd_sampling 00:12:05.863 ************************************ 00:12:05.863 04:49:19 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:12:05.863 04:49:19 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:12:05.863 Process bdev QD sampling period testing pid: 45853 00:12:05.863 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:05.863 04:49:19 -- bdev/blockdev.sh@539 -- # QD_PID=45853 00:12:05.863 04:49:19 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 45853' 00:12:05.863 04:49:19 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:05.863 04:49:19 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:05.863 04:49:19 -- bdev/blockdev.sh@542 -- # waitforlisten 45853 00:12:05.863 04:49:19 -- common/autotest_common.sh@819 -- # '[' -z 45853 ']' 00:12:05.863 04:49:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.863 04:49:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:05.863 04:49:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.863 04:49:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:05.863 04:49:19 -- common/autotest_common.sh@10 -- # set +x 00:12:06.122 [2024-05-15 04:49:20.137005] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:06.122 [2024-05-15 04:49:20.137188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45853 ] 00:12:06.122 [2024-05-15 04:49:20.302346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.378 [2024-05-15 04:49:20.547102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.378 [2024-05-15 04:49:20.547102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.755 04:49:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:07.755 04:49:21 -- common/autotest_common.sh@852 -- # return 0 00:12:07.755 04:49:21 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:07.755 04:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:07.755 04:49:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.755 Malloc_QD 00:12:07.755 04:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:07.755 04:49:21 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:12:07.755 04:49:21 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:12:07.755 04:49:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:07.755 04:49:21 -- common/autotest_common.sh@889 -- # local i 00:12:07.755 04:49:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:07.755 04:49:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:07.755 04:49:21 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:07.755 04:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:07.755 04:49:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.755 04:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:07.755 04:49:21 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:07.755 04:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:07.755 04:49:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.755 [ 00:12:07.755 { 00:12:07.755 "name": "Malloc_QD", 00:12:07.755 "aliases": [ 00:12:07.755 "816b275e-fb2c-4164-89e7-df47b98b3885" 00:12:07.755 ], 00:12:07.755 "product_name": "Malloc disk", 00:12:07.755 "block_size": 512, 00:12:07.755 "num_blocks": 
262144, 00:12:07.755 "uuid": "816b275e-fb2c-4164-89e7-df47b98b3885", 00:12:07.755 "assigned_rate_limits": { 00:12:07.755 "rw_ios_per_sec": 0, 00:12:07.755 "rw_mbytes_per_sec": 0, 00:12:07.755 "r_mbytes_per_sec": 0, 00:12:07.755 "w_mbytes_per_sec": 0 00:12:07.755 }, 00:12:07.755 "claimed": false, 00:12:07.755 "zoned": false, 00:12:07.755 "supported_io_types": { 00:12:07.755 "read": true, 00:12:07.755 "write": true, 00:12:07.755 "unmap": true, 00:12:07.755 "write_zeroes": true, 00:12:07.755 "flush": true, 00:12:07.755 "reset": true, 00:12:07.755 "compare": false, 00:12:07.755 "compare_and_write": false, 00:12:07.755 "abort": true, 00:12:07.755 "nvme_admin": false, 00:12:07.755 "nvme_io": false 00:12:07.755 }, 00:12:07.755 "memory_domains": [ 00:12:07.755 { 00:12:07.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.755 "dma_device_type": 2 00:12:07.755 } 00:12:07.755 ], 00:12:07.755 "driver_specific": {} 00:12:07.755 } 00:12:07.755 ] 00:12:07.755 04:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:07.755 04:49:21 -- common/autotest_common.sh@895 -- # return 0 00:12:07.755 04:49:21 -- bdev/blockdev.sh@548 -- # sleep 2 00:12:07.755 04:49:21 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:07.755 Running I/O for 5 seconds... 00:12:09.660 04:49:23 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:12:09.660 04:49:23 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:12:09.660 04:49:23 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:12:09.660 04:49:23 -- bdev/blockdev.sh@519 -- # local iostats 00:12:09.660 04:49:23 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:09.660 04:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.660 04:49:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 04:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.660 04:49:23 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:09.660 04:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.660 04:49:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 04:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.660 04:49:23 -- bdev/blockdev.sh@523 -- # iostats='{ 00:12:09.660 "tick_rate": 2100000000, 00:12:09.660 "ticks": 1348450970736, 00:12:09.660 "bdevs": [ 00:12:09.660 { 00:12:09.660 "name": "Malloc_QD", 00:12:09.660 "bytes_read": 2128646656, 00:12:09.660 "num_read_ops": 519683, 00:12:09.660 "bytes_written": 0, 00:12:09.660 "num_write_ops": 0, 00:12:09.660 "bytes_unmapped": 0, 00:12:09.660 "num_unmap_ops": 0, 00:12:09.660 "bytes_copied": 0, 00:12:09.660 "num_copy_ops": 0, 00:12:09.660 "read_latency_ticks": 2045576975982, 00:12:09.660 "max_read_latency_ticks": 4671698, 00:12:09.660 "min_read_latency_ticks": 262640, 00:12:09.660 "write_latency_ticks": 0, 00:12:09.660 "max_write_latency_ticks": 0, 00:12:09.660 "min_write_latency_ticks": 0, 00:12:09.660 "unmap_latency_ticks": 0, 00:12:09.660 "max_unmap_latency_ticks": 0, 00:12:09.660 "min_unmap_latency_ticks": 0, 00:12:09.660 "copy_latency_ticks": 0, 00:12:09.660 "max_copy_latency_ticks": 0, 00:12:09.660 "min_copy_latency_ticks": 0, 00:12:09.660 "io_error": {}, 00:12:09.660 "queue_depth_polling_period": 10, 00:12:09.660 "queue_depth": 512, 00:12:09.660 "io_time": 70, 00:12:09.660 "weighted_io_time": 35840 00:12:09.660 } 00:12:09.660 ] 00:12:09.660 }' 00:12:09.918 04:49:23 -- bdev/blockdev.sh@525 -- # jq -r 
'.bdevs[0].queue_depth_polling_period' 00:12:09.918 04:49:23 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:12:09.918 04:49:23 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:12:09.918 04:49:23 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:12:09.918 04:49:23 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:09.918 04:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:09.918 04:49:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.918 00:12:09.918 Latency(us) 00:12:09.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.918 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:09.918 Malloc_QD : 1.99 133154.55 520.13 0.00 0.00 1919.80 468.11 3417.23 00:12:09.919 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:09.919 Malloc_QD : 1.99 139671.88 545.59 0.00 0.00 1830.33 308.18 1989.49 00:12:09.919 =================================================================================================================== 00:12:09.919 Total : 272826.43 1065.73 0.00 0.00 1873.99 308.18 3417.23 00:12:09.919 0 00:12:09.919 04:49:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:09.919 04:49:24 -- bdev/blockdev.sh@552 -- # killprocess 45853 00:12:09.919 04:49:24 -- common/autotest_common.sh@926 -- # '[' -z 45853 ']' 00:12:09.919 04:49:24 -- common/autotest_common.sh@930 -- # kill -0 45853 00:12:09.919 04:49:24 -- common/autotest_common.sh@931 -- # uname 00:12:09.919 04:49:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.919 04:49:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45853 00:12:09.919 killing process with pid 45853 00:12:09.919 Received shutdown signal, test time was about 2.168230 seconds 00:12:09.919 00:12:09.919 Latency(us) 00:12:09.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.919 =================================================================================================================== 00:12:09.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:09.919 04:49:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:09.919 04:49:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:09.919 04:49:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45853' 00:12:09.919 04:49:24 -- common/autotest_common.sh@945 -- # kill 45853 00:12:09.919 04:49:24 -- common/autotest_common.sh@950 -- # wait 45853 00:12:11.824 ************************************ 00:12:11.824 END TEST bdev_qd_sampling 00:12:11.824 ************************************ 00:12:11.824 04:49:25 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:12:11.824 00:12:11.824 real 0m5.993s 00:12:11.824 user 0m11.061s 00:12:11.824 sys 0m0.535s 00:12:11.824 04:49:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.824 04:49:25 -- common/autotest_common.sh@10 -- # set +x 00:12:11.824 04:49:26 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:12:11.824 04:49:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:11.824 04:49:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:11.824 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:12:11.824 ************************************ 00:12:11.824 START TEST bdev_error 00:12:11.824 ************************************ 00:12:11.824 04:49:26 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:12:11.824 04:49:26 -- bdev/blockdev.sh@464 -- 
# DEV_1=Dev_1 00:12:11.824 04:49:26 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:12:11.824 04:49:26 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:12:11.824 Process error testing pid: 45971 00:12:11.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.824 04:49:26 -- bdev/blockdev.sh@470 -- # ERR_PID=45971 00:12:11.824 04:49:26 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 45971' 00:12:11.824 04:49:26 -- bdev/blockdev.sh@472 -- # waitforlisten 45971 00:12:11.824 04:49:26 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:11.824 04:49:26 -- common/autotest_common.sh@819 -- # '[' -z 45971 ']' 00:12:11.824 04:49:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.824 04:49:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:11.824 04:49:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.824 04:49:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:11.824 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.084 [2024-05-15 04:49:26.192069] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:12.084 [2024-05-15 04:49:26.192257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45971 ] 00:12:12.343 [2024-05-15 04:49:26.349908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.602 [2024-05-15 04:49:26.596089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.539 04:49:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:13.539 04:49:27 -- common/autotest_common.sh@852 -- # return 0 00:12:13.539 04:49:27 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:13.539 04:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.539 04:49:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.798 Dev_1 00:12:13.798 04:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.798 04:49:27 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:12:13.798 04:49:27 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:12:13.798 04:49:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:13.798 04:49:27 -- common/autotest_common.sh@889 -- # local i 00:12:13.798 04:49:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:13.798 04:49:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:13.798 04:49:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:13.798 04:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.798 04:49:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.798 04:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.798 04:49:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:13.798 04:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.798 04:49:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.798 [ 00:12:13.798 { 00:12:13.798 "name": "Dev_1", 00:12:13.798 "aliases": [ 00:12:13.798 "088ba35c-cecd-43b1-a474-8067455c847b" 00:12:13.798 ], 00:12:13.798 "product_name": "Malloc disk", 00:12:13.798 "block_size": 512, 00:12:13.798 
"num_blocks": 262144, 00:12:13.798 "uuid": "088ba35c-cecd-43b1-a474-8067455c847b", 00:12:13.798 "assigned_rate_limits": { 00:12:13.798 "rw_ios_per_sec": 0, 00:12:13.798 "rw_mbytes_per_sec": 0, 00:12:13.798 "r_mbytes_per_sec": 0, 00:12:13.798 "w_mbytes_per_sec": 0 00:12:13.798 }, 00:12:13.798 "claimed": false, 00:12:13.798 "zoned": false, 00:12:13.798 "supported_io_types": { 00:12:13.798 "read": true, 00:12:13.798 "write": true, 00:12:13.798 "unmap": true, 00:12:13.798 "write_zeroes": true, 00:12:13.798 "flush": true, 00:12:13.798 "reset": true, 00:12:13.798 "compare": false, 00:12:13.798 "compare_and_write": false, 00:12:13.798 "abort": true, 00:12:13.798 "nvme_admin": false, 00:12:13.798 "nvme_io": false 00:12:13.798 }, 00:12:13.798 "memory_domains": [ 00:12:13.798 { 00:12:13.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.798 "dma_device_type": 2 00:12:13.798 } 00:12:13.798 ], 00:12:13.798 "driver_specific": {} 00:12:13.798 } 00:12:13.798 ] 00:12:13.798 04:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.798 04:49:27 -- common/autotest_common.sh@895 -- # return 0 00:12:13.798 04:49:27 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:12:13.798 04:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.798 04:49:27 -- common/autotest_common.sh@10 -- # set +x 00:12:13.798 true 00:12:13.798 04:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.798 04:49:27 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:13.798 04:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.798 04:49:27 -- common/autotest_common.sh@10 -- # set +x 00:12:14.057 Dev_2 00:12:14.057 04:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.057 04:49:28 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:12:14.057 04:49:28 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:12:14.057 04:49:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:14.057 04:49:28 -- common/autotest_common.sh@889 -- # local i 00:12:14.057 04:49:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:14.057 04:49:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:14.057 04:49:28 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:14.058 04:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.058 04:49:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.058 04:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.058 04:49:28 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:14.058 04:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.058 04:49:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.058 [ 00:12:14.058 { 00:12:14.058 "name": "Dev_2", 00:12:14.058 "aliases": [ 00:12:14.058 "684f89c3-fbe7-4d38-b242-c437f97af429" 00:12:14.058 ], 00:12:14.058 "product_name": "Malloc disk", 00:12:14.058 "block_size": 512, 00:12:14.058 "num_blocks": 262144, 00:12:14.058 "uuid": "684f89c3-fbe7-4d38-b242-c437f97af429", 00:12:14.058 "assigned_rate_limits": { 00:12:14.058 "rw_ios_per_sec": 0, 00:12:14.058 "rw_mbytes_per_sec": 0, 00:12:14.058 "r_mbytes_per_sec": 0, 00:12:14.058 "w_mbytes_per_sec": 0 00:12:14.058 }, 00:12:14.058 "claimed": false, 00:12:14.058 "zoned": false, 00:12:14.058 "supported_io_types": { 00:12:14.058 "read": true, 00:12:14.058 "write": true, 00:12:14.058 "unmap": true, 00:12:14.058 "write_zeroes": true, 00:12:14.058 "flush": true, 00:12:14.058 "reset": true, 00:12:14.058 
"compare": false, 00:12:14.058 "compare_and_write": false, 00:12:14.058 "abort": true, 00:12:14.058 "nvme_admin": false, 00:12:14.058 "nvme_io": false 00:12:14.058 }, 00:12:14.058 "memory_domains": [ 00:12:14.058 { 00:12:14.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.058 "dma_device_type": 2 00:12:14.058 } 00:12:14.058 ], 00:12:14.058 "driver_specific": {} 00:12:14.058 } 00:12:14.058 ] 00:12:14.058 04:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.058 04:49:28 -- common/autotest_common.sh@895 -- # return 0 00:12:14.058 04:49:28 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:14.058 04:49:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.058 04:49:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.058 04:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.058 04:49:28 -- bdev/blockdev.sh@482 -- # sleep 1 00:12:14.058 04:49:28 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:14.058 Running I/O for 5 seconds... 00:12:14.994 Process is existed as continue on error is set. Pid: 45971 00:12:14.994 04:49:29 -- bdev/blockdev.sh@485 -- # kill -0 45971 00:12:14.994 04:49:29 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 45971' 00:12:14.994 04:49:29 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:12:14.994 04:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.994 04:49:29 -- common/autotest_common.sh@10 -- # set +x 00:12:14.994 04:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.994 04:49:29 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:12:14.994 04:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.994 04:49:29 -- common/autotest_common.sh@10 -- # set +x 00:12:14.994 Timeout while waiting for response: 00:12:14.994 00:12:14.994 00:12:15.561 04:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.561 04:49:29 -- bdev/blockdev.sh@495 -- # sleep 5 00:12:19.819 00:12:19.819 Latency(us) 00:12:19.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.819 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:19.819 EE_Dev_1 : 0.89 129816.79 507.10 5.62 0.00 122.58 75.58 462.26 00:12:19.819 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:19.819 Dev_2 : 5.00 261412.96 1021.14 0.00 0.00 60.54 19.38 407446.67 00:12:19.819 =================================================================================================================== 00:12:19.819 Total : 391229.74 1528.24 5.62 0.00 65.58 19.38 407446.67 00:12:20.387 04:49:34 -- bdev/blockdev.sh@497 -- # killprocess 45971 00:12:20.387 04:49:34 -- common/autotest_common.sh@926 -- # '[' -z 45971 ']' 00:12:20.387 04:49:34 -- common/autotest_common.sh@930 -- # kill -0 45971 00:12:20.387 04:49:34 -- common/autotest_common.sh@931 -- # uname 00:12:20.387 04:49:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.387 04:49:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45971 00:12:20.387 04:49:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:20.387 killing process with pid 45971 00:12:20.387 04:49:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:20.387 04:49:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45971' 00:12:20.387 Received shutdown signal, test time 
was about 5.000000 seconds 00:12:20.387 00:12:20.387 Latency(us) 00:12:20.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.387 =================================================================================================================== 00:12:20.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:20.387 04:49:34 -- common/autotest_common.sh@945 -- # kill 45971 00:12:20.387 04:49:34 -- common/autotest_common.sh@950 -- # wait 45971 00:12:22.919 Process error testing pid: 46109 00:12:22.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.919 04:49:36 -- bdev/blockdev.sh@501 -- # ERR_PID=46109 00:12:22.919 04:49:36 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 46109' 00:12:22.919 04:49:36 -- bdev/blockdev.sh@503 -- # waitforlisten 46109 00:12:22.919 04:49:36 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:12:22.919 04:49:36 -- common/autotest_common.sh@819 -- # '[' -z 46109 ']' 00:12:22.919 04:49:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.919 04:49:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.919 04:49:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.919 04:49:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.919 04:49:36 -- common/autotest_common.sh@10 -- # set +x 00:12:22.919 [2024-05-15 04:49:36.676648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:22.919 [2024-05-15 04:49:36.676919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46109 ] 00:12:22.919 [2024-05-15 04:49:36.853983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.919 [2024-05-15 04:49:37.107817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.853 04:49:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.853 04:49:38 -- common/autotest_common.sh@852 -- # return 0 00:12:23.853 04:49:38 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:23.853 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.853 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.112 Dev_1 00:12:24.112 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.112 04:49:38 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:12:24.112 04:49:38 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:12:24.112 04:49:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:24.112 04:49:38 -- common/autotest_common.sh@889 -- # local i 00:12:24.112 04:49:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:24.112 04:49:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:24.112 04:49:38 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:24.112 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.112 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.112 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.112 04:49:38 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:24.112 04:49:38 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:24.112 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.112 [ 00:12:24.112 { 00:12:24.112 "name": "Dev_1", 00:12:24.112 "aliases": [ 00:12:24.112 "51f78226-596b-46e0-b145-ba074dc53d32" 00:12:24.112 ], 00:12:24.112 "product_name": "Malloc disk", 00:12:24.112 "block_size": 512, 00:12:24.112 "num_blocks": 262144, 00:12:24.112 "uuid": "51f78226-596b-46e0-b145-ba074dc53d32", 00:12:24.112 "assigned_rate_limits": { 00:12:24.112 "rw_ios_per_sec": 0, 00:12:24.112 "rw_mbytes_per_sec": 0, 00:12:24.112 "r_mbytes_per_sec": 0, 00:12:24.112 "w_mbytes_per_sec": 0 00:12:24.112 }, 00:12:24.112 "claimed": false, 00:12:24.112 "zoned": false, 00:12:24.112 "supported_io_types": { 00:12:24.112 "read": true, 00:12:24.112 "write": true, 00:12:24.112 "unmap": true, 00:12:24.112 "write_zeroes": true, 00:12:24.112 "flush": true, 00:12:24.112 "reset": true, 00:12:24.112 "compare": false, 00:12:24.112 "compare_and_write": false, 00:12:24.112 "abort": true, 00:12:24.112 "nvme_admin": false, 00:12:24.112 "nvme_io": false 00:12:24.112 }, 00:12:24.112 "memory_domains": [ 00:12:24.112 { 00:12:24.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.112 "dma_device_type": 2 00:12:24.112 } 00:12:24.112 ], 00:12:24.112 "driver_specific": {} 00:12:24.112 } 00:12:24.112 ] 00:12:24.112 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.112 04:49:38 -- common/autotest_common.sh@895 -- # return 0 00:12:24.112 04:49:38 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:12:24.112 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.112 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.112 true 00:12:24.112 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.112 04:49:38 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:24.112 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.112 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.371 Dev_2 00:12:24.371 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.371 04:49:38 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:12:24.371 04:49:38 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:12:24.371 04:49:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:24.371 04:49:38 -- common/autotest_common.sh@889 -- # local i 00:12:24.371 04:49:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:24.371 04:49:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:24.371 04:49:38 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:24.371 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.371 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.371 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.371 04:49:38 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:24.371 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.371 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.371 [ 00:12:24.371 { 00:12:24.371 "name": "Dev_2", 00:12:24.371 "aliases": [ 00:12:24.371 "a8961e19-3266-4791-bbca-4c0c85cd8e48" 00:12:24.371 ], 00:12:24.371 "product_name": "Malloc disk", 00:12:24.371 "block_size": 512, 00:12:24.371 "num_blocks": 262144, 00:12:24.371 "uuid": "a8961e19-3266-4791-bbca-4c0c85cd8e48", 00:12:24.371 "assigned_rate_limits": { 00:12:24.371 "rw_ios_per_sec": 0, 00:12:24.371 "rw_mbytes_per_sec": 0, 00:12:24.371 "r_mbytes_per_sec": 0, 00:12:24.371 
"w_mbytes_per_sec": 0 00:12:24.371 }, 00:12:24.371 "claimed": false, 00:12:24.371 "zoned": false, 00:12:24.371 "supported_io_types": { 00:12:24.371 "read": true, 00:12:24.371 "write": true, 00:12:24.371 "unmap": true, 00:12:24.371 "write_zeroes": true, 00:12:24.371 "flush": true, 00:12:24.371 "reset": true, 00:12:24.371 "compare": false, 00:12:24.371 "compare_and_write": false, 00:12:24.371 "abort": true, 00:12:24.371 "nvme_admin": false, 00:12:24.371 "nvme_io": false 00:12:24.371 }, 00:12:24.371 "memory_domains": [ 00:12:24.371 { 00:12:24.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.371 "dma_device_type": 2 00:12:24.371 } 00:12:24.371 ], 00:12:24.371 "driver_specific": {} 00:12:24.371 } 00:12:24.371 ] 00:12:24.371 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.371 04:49:38 -- common/autotest_common.sh@895 -- # return 0 00:12:24.371 04:49:38 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:24.371 04:49:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.371 04:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.371 04:49:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.371 04:49:38 -- bdev/blockdev.sh@513 -- # NOT wait 46109 00:12:24.371 04:49:38 -- common/autotest_common.sh@640 -- # local es=0 00:12:24.371 04:49:38 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 46109 00:12:24.371 04:49:38 -- common/autotest_common.sh@628 -- # local arg=wait 00:12:24.371 04:49:38 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:24.371 04:49:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:24.371 04:49:38 -- common/autotest_common.sh@632 -- # type -t wait 00:12:24.371 04:49:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:24.371 04:49:38 -- common/autotest_common.sh@643 -- # wait 46109 00:12:24.629 Running I/O for 5 seconds... 
00:12:24.629 task offset: 49064 on job bdev=EE_Dev_1 fails 00:12:24.629 00:12:24.629 Latency(us) 00:12:24.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.629 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:24.629 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:12:24.629 EE_Dev_1 : 0.00 82089.55 320.66 18656.72 0.00 123.02 56.32 228.21 00:12:24.629 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:24.629 Dev_2 : 0.00 87431.69 341.53 0.00 0.00 94.03 46.57 157.99 00:12:24.629 =================================================================================================================== 00:12:24.629 Total : 169521.25 662.19 18656.72 0.00 107.30 46.57 228.21 00:12:24.629 [2024-05-15 04:49:38.648299] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:24.629 request: 00:12:24.629 { 00:12:24.629 "method": "perform_tests", 00:12:24.629 "req_id": 1 00:12:24.629 } 00:12:24.629 Got JSON-RPC error response 00:12:24.629 response: 00:12:24.629 { 00:12:24.629 "code": -32603, 00:12:24.629 "message": "bdevperf failed with error Operation not permitted" 00:12:24.629 } 00:12:27.162 04:49:41 -- common/autotest_common.sh@643 -- # es=255 00:12:27.162 04:49:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:27.162 04:49:41 -- common/autotest_common.sh@652 -- # es=127 00:12:27.162 04:49:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:12:27.162 04:49:41 -- common/autotest_common.sh@660 -- # es=1 00:12:27.162 04:49:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:27.162 00:12:27.162 real 0m15.030s 00:12:27.162 user 0m14.886s 00:12:27.162 sys 0m1.191s 00:12:27.162 04:49:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.162 04:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.162 ************************************ 00:12:27.162 END TEST bdev_error 00:12:27.162 ************************************ 00:12:27.162 04:49:41 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:12:27.162 04:49:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:27.162 04:49:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.162 04:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.162 ************************************ 00:12:27.162 START TEST bdev_stat 00:12:27.162 ************************************ 00:12:27.162 Process Bdev IO statistics testing pid: 46193 00:12:27.162 04:49:41 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:12:27.162 04:49:41 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:12:27.162 04:49:41 -- bdev/blockdev.sh@594 -- # STAT_PID=46193 00:12:27.162 04:49:41 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 46193' 00:12:27.162 04:49:41 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:12:27.162 04:49:41 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:12:27.162 04:49:41 -- bdev/blockdev.sh@597 -- # waitforlisten 46193 00:12:27.162 04:49:41 -- common/autotest_common.sh@819 -- # '[' -z 46193 ']' 00:12:27.162 04:49:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
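The bdev_error run that just finished (TEST bdev_error, pids 45971 and 46109) drives everything through JSON-RPC against a bdevperf started with -z. A minimal sketch of the same sequence, using only the RPCs visible in the trace above; the rpc/perf shell variables are illustrative shorthand, and bdevperf is assumed to already be listening on the default /var/tmp/spdk.sock:

  # bdevperf was started as: bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

  $rpc bdev_malloc_create -b Dev_1 128 512              # backing bdev: 128 MiB, 512 B blocks
  $rpc bdev_error_create Dev_1                          # wraps Dev_1; error bdev is named EE_Dev_1
  $rpc bdev_malloc_create -b Dev_2 128 512              # healthy bdev running alongside
  $rpc bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type
  $perf -t 1 perform_tests                              # start the queued bdevperf job

  # teardown, as the test does before killing bdevperf
  $rpc bdev_error_delete EE_Dev_1
  $rpc bdev_malloc_delete Dev_1
  $rpc bdev_malloc_delete Dev_2

The second half of the test (pid 46109) repeats the setup but calls perform_tests while EE_Dev_1 is still failing I/O, so the JSON-RPC error response and the non-zero exit status seen above are the expected outcome.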
00:12:27.162 04:49:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:27.162 04:49:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.162 04:49:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:27.162 04:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.162 [2024-05-15 04:49:41.303111] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:27.162 [2024-05-15 04:49:41.303368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46193 ] 00:12:27.421 [2024-05-15 04:49:41.484557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.679 [2024-05-15 04:49:41.738360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.679 [2024-05-15 04:49:41.738363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.615 04:49:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:28.615 04:49:42 -- common/autotest_common.sh@852 -- # return 0 00:12:28.615 04:49:42 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:12:28.615 04:49:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.615 04:49:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.873 Malloc_STAT 00:12:28.873 04:49:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.873 04:49:42 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:12:28.873 04:49:42 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:12:28.873 04:49:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:28.873 04:49:42 -- common/autotest_common.sh@889 -- # local i 00:12:28.873 04:49:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:28.873 04:49:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:28.873 04:49:42 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:28.873 04:49:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.873 04:49:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.873 04:49:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.873 04:49:42 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:12:28.873 04:49:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.873 04:49:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.873 [ 00:12:28.873 { 00:12:28.873 "name": "Malloc_STAT", 00:12:28.873 "aliases": [ 00:12:28.873 "e7405358-9f18-46e3-9bd6-94a86cce9e68" 00:12:28.873 ], 00:12:28.873 "product_name": "Malloc disk", 00:12:28.873 "block_size": 512, 00:12:28.873 "num_blocks": 262144, 00:12:28.873 "uuid": "e7405358-9f18-46e3-9bd6-94a86cce9e68", 00:12:28.873 "assigned_rate_limits": { 00:12:28.873 "rw_ios_per_sec": 0, 00:12:28.873 "rw_mbytes_per_sec": 0, 00:12:28.873 "r_mbytes_per_sec": 0, 00:12:28.873 "w_mbytes_per_sec": 0 00:12:28.873 }, 00:12:28.873 "claimed": false, 00:12:28.873 "zoned": false, 00:12:28.873 "supported_io_types": { 00:12:28.873 "read": true, 00:12:28.874 "write": true, 00:12:28.874 "unmap": true, 00:12:28.874 "write_zeroes": true, 00:12:28.874 "flush": true, 00:12:28.874 "reset": true, 00:12:28.874 "compare": false, 00:12:28.874 "compare_and_write": false, 00:12:28.874 "abort": true, 00:12:28.874 "nvme_admin": false, 00:12:28.874 "nvme_io": 
false 00:12:28.874 }, 00:12:28.874 "memory_domains": [ 00:12:28.874 { 00:12:28.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.874 "dma_device_type": 2 00:12:28.874 } 00:12:28.874 ], 00:12:28.874 "driver_specific": {} 00:12:28.874 } 00:12:28.874 ] 00:12:28.874 04:49:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.874 04:49:42 -- common/autotest_common.sh@895 -- # return 0 00:12:28.874 04:49:42 -- bdev/blockdev.sh@603 -- # sleep 2 00:12:28.874 04:49:42 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.874 Running I/O for 10 seconds... 00:12:30.773 04:49:44 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:12:30.773 04:49:44 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:12:30.773 04:49:44 -- bdev/blockdev.sh@558 -- # local iostats 00:12:30.773 04:49:44 -- bdev/blockdev.sh@559 -- # local io_count1 00:12:30.773 04:49:44 -- bdev/blockdev.sh@560 -- # local io_count2 00:12:30.773 04:49:44 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:12:30.773 04:49:44 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:12:30.773 04:49:44 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:12:30.773 04:49:44 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:12:30.773 04:49:44 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:30.773 04:49:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.773 04:49:44 -- common/autotest_common.sh@10 -- # set +x 00:12:30.773 04:49:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.773 04:49:44 -- bdev/blockdev.sh@566 -- # iostats='{ 00:12:30.773 "tick_rate": 2100000000, 00:12:30.773 "ticks": 1392709150128, 00:12:30.773 "bdevs": [ 00:12:30.773 { 00:12:30.773 "name": "Malloc_STAT", 00:12:30.773 "bytes_read": 2144375296, 00:12:30.773 "num_read_ops": 523523, 00:12:30.773 "bytes_written": 0, 00:12:30.773 "num_write_ops": 0, 00:12:30.773 "bytes_unmapped": 0, 00:12:30.773 "num_unmap_ops": 0, 00:12:30.773 "bytes_copied": 0, 00:12:30.773 "num_copy_ops": 0, 00:12:30.773 "read_latency_ticks": 2033235838590, 00:12:30.773 "max_read_latency_ticks": 4479966, 00:12:30.773 "min_read_latency_ticks": 256816, 00:12:30.773 "write_latency_ticks": 0, 00:12:30.773 "max_write_latency_ticks": 0, 00:12:30.773 "min_write_latency_ticks": 0, 00:12:30.773 "unmap_latency_ticks": 0, 00:12:30.773 "max_unmap_latency_ticks": 0, 00:12:30.773 "min_unmap_latency_ticks": 0, 00:12:30.773 "copy_latency_ticks": 0, 00:12:30.773 "max_copy_latency_ticks": 0, 00:12:30.773 "min_copy_latency_ticks": 0, 00:12:30.773 "io_error": {} 00:12:30.773 } 00:12:30.773 ] 00:12:30.773 }' 00:12:30.773 04:49:44 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:12:31.031 04:49:45 -- bdev/blockdev.sh@567 -- # io_count1=523523 00:12:31.031 04:49:45 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:12:31.031 04:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.031 04:49:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.031 04:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.031 04:49:45 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:12:31.031 "tick_rate": 2100000000, 00:12:31.031 "ticks": 1392859815116, 00:12:31.031 "name": "Malloc_STAT", 00:12:31.031 "channels": [ 00:12:31.031 { 00:12:31.031 "thread_id": 2, 00:12:31.031 "bytes_read": 1101004800, 00:12:31.031 "num_read_ops": 268800, 00:12:31.031 "bytes_written": 0, 00:12:31.031 "num_write_ops": 0, 00:12:31.031 
"bytes_unmapped": 0, 00:12:31.031 "num_unmap_ops": 0, 00:12:31.031 "bytes_copied": 0, 00:12:31.031 "num_copy_ops": 0, 00:12:31.031 "read_latency_ticks": 1054718083430, 00:12:31.031 "max_read_latency_ticks": 4527002, 00:12:31.031 "min_read_latency_ticks": 3234508, 00:12:31.031 "write_latency_ticks": 0, 00:12:31.031 "max_write_latency_ticks": 0, 00:12:31.031 "min_write_latency_ticks": 0, 00:12:31.031 "unmap_latency_ticks": 0, 00:12:31.031 "max_unmap_latency_ticks": 0, 00:12:31.031 "min_unmap_latency_ticks": 0, 00:12:31.031 "copy_latency_ticks": 0, 00:12:31.031 "max_copy_latency_ticks": 0, 00:12:31.031 "min_copy_latency_ticks": 0 00:12:31.031 }, 00:12:31.031 { 00:12:31.031 "thread_id": 3, 00:12:31.031 "bytes_read": 1124073472, 00:12:31.031 "num_read_ops": 274432, 00:12:31.031 "bytes_written": 0, 00:12:31.031 "num_write_ops": 0, 00:12:31.031 "bytes_unmapped": 0, 00:12:31.031 "num_unmap_ops": 0, 00:12:31.031 "bytes_copied": 0, 00:12:31.031 "num_copy_ops": 0, 00:12:31.031 "read_latency_ticks": 1055854773224, 00:12:31.031 "max_read_latency_ticks": 4276502, 00:12:31.031 "min_read_latency_ticks": 3112026, 00:12:31.031 "write_latency_ticks": 0, 00:12:31.031 "max_write_latency_ticks": 0, 00:12:31.031 "min_write_latency_ticks": 0, 00:12:31.031 "unmap_latency_ticks": 0, 00:12:31.031 "max_unmap_latency_ticks": 0, 00:12:31.031 "min_unmap_latency_ticks": 0, 00:12:31.031 "copy_latency_ticks": 0, 00:12:31.031 "max_copy_latency_ticks": 0, 00:12:31.031 "min_copy_latency_ticks": 0 00:12:31.031 } 00:12:31.031 ] 00:12:31.031 }' 00:12:31.031 04:49:45 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:12:31.031 04:49:45 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=268800 00:12:31.031 04:49:45 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=268800 00:12:31.031 04:49:45 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:12:31.032 04:49:45 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=274432 00:12:31.032 04:49:45 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=543232 00:12:31.032 04:49:45 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:31.032 04:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.032 04:49:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.032 04:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.032 04:49:45 -- bdev/blockdev.sh@575 -- # iostats='{ 00:12:31.032 "tick_rate": 2100000000, 00:12:31.032 "ticks": 1393128073118, 00:12:31.032 "bdevs": [ 00:12:31.032 { 00:12:31.032 "name": "Malloc_STAT", 00:12:31.032 "bytes_read": 2369819136, 00:12:31.032 "num_read_ops": 578563, 00:12:31.032 "bytes_written": 0, 00:12:31.032 "num_write_ops": 0, 00:12:31.032 "bytes_unmapped": 0, 00:12:31.032 "num_unmap_ops": 0, 00:12:31.032 "bytes_copied": 0, 00:12:31.032 "num_copy_ops": 0, 00:12:31.032 "read_latency_ticks": 2247126796888, 00:12:31.032 "max_read_latency_ticks": 4558100, 00:12:31.032 "min_read_latency_ticks": 256816, 00:12:31.032 "write_latency_ticks": 0, 00:12:31.032 "max_write_latency_ticks": 0, 00:12:31.032 "min_write_latency_ticks": 0, 00:12:31.032 "unmap_latency_ticks": 0, 00:12:31.032 "max_unmap_latency_ticks": 0, 00:12:31.032 "min_unmap_latency_ticks": 0, 00:12:31.032 "copy_latency_ticks": 0, 00:12:31.032 "max_copy_latency_ticks": 0, 00:12:31.032 "min_copy_latency_ticks": 0, 00:12:31.032 "io_error": {} 00:12:31.032 } 00:12:31.032 ] 00:12:31.032 }' 00:12:31.032 04:49:45 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:12:31.032 04:49:45 -- bdev/blockdev.sh@576 -- # 
io_count2=578563 00:12:31.032 04:49:45 -- bdev/blockdev.sh@581 -- # '[' 543232 -lt 523523 ']' 00:12:31.032 04:49:45 -- bdev/blockdev.sh@581 -- # '[' 543232 -gt 578563 ']' 00:12:31.032 04:49:45 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:12:31.032 04:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.032 04:49:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.032 00:12:31.032 Latency(us) 00:12:31.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.032 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:31.032 Malloc_STAT : 2.17 136950.65 534.96 0.00 0.00 1866.69 468.11 2184.53 00:12:31.032 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:31.032 Malloc_STAT : 2.17 139623.99 545.41 0.00 0.00 1831.04 296.47 2044.10 00:12:31.032 =================================================================================================================== 00:12:31.032 Total : 276574.65 1080.37 0.00 0.00 1848.69 296.47 2184.53 00:12:31.290 04:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.290 0 00:12:31.290 04:49:45 -- bdev/blockdev.sh@607 -- # killprocess 46193 00:12:31.290 04:49:45 -- common/autotest_common.sh@926 -- # '[' -z 46193 ']' 00:12:31.290 04:49:45 -- common/autotest_common.sh@930 -- # kill -0 46193 00:12:31.290 04:49:45 -- common/autotest_common.sh@931 -- # uname 00:12:31.290 04:49:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:31.290 04:49:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46193 00:12:31.290 killing process with pid 46193 00:12:31.290 Received shutdown signal, test time was about 2.349682 seconds 00:12:31.290 00:12:31.290 Latency(us) 00:12:31.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.290 =================================================================================================================== 00:12:31.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:31.290 04:49:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:31.290 04:49:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:31.290 04:49:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46193' 00:12:31.290 04:49:45 -- common/autotest_common.sh@945 -- # kill 46193 00:12:31.290 04:49:45 -- common/autotest_common.sh@950 -- # wait 46193 00:12:33.190 ************************************ 00:12:33.190 END TEST bdev_stat 00:12:33.190 ************************************ 00:12:33.190 04:49:47 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:12:33.190 00:12:33.190 real 0m6.104s 00:12:33.190 user 0m11.319s 00:12:33.190 sys 0m0.583s 00:12:33.190 04:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.190 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.190 04:49:47 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:12:33.190 04:49:47 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:12:33.190 04:49:47 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:33.190 04:49:47 -- bdev/blockdev.sh@809 -- # cleanup 00:12:33.190 04:49:47 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:33.190 04:49:47 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:33.190 04:49:47 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:12:33.190 04:49:47 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:12:33.190 04:49:47 -- 
bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:12:33.190 04:49:47 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:12:33.190 ************************************ 00:12:33.190 END TEST blockdev_general 00:12:33.190 ************************************ 00:12:33.190 00:12:33.190 real 2m13.466s 00:12:33.190 user 5m44.755s 00:12:33.190 sys 0m11.916s 00:12:33.190 04:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.190 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.190 04:49:47 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:33.190 04:49:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:33.190 04:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.190 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.190 ************************************ 00:12:33.190 START TEST bdev_raid 00:12:33.190 ************************************ 00:12:33.190 04:49:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:33.449 * Looking for test storage... 00:12:33.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:33.449 04:49:47 -- bdev/nbd_common.sh@6 -- # set -e 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@716 -- # uname -s 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:12:33.449 modprobe: FATAL: Module nbd not found. 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:12:33.449 04:49:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:33.449 04:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.449 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 ************************************ 00:12:33.449 START TEST raid0_resize_test 00:12:33.449 ************************************ 00:12:33.449 04:49:47 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@301 -- # raid_pid=46374 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 46374' 00:12:33.449 Process raid pid: 46374 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@303 -- # waitforlisten 46374 /var/tmp/spdk-raid.sock 00:12:33.449 04:49:47 -- common/autotest_common.sh@819 -- # '[' -z 46374 ']' 00:12:33.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
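The bdev_stat pass above checks that per-channel iostat is consistent with the bdev-wide counters: it snapshots num_read_ops, reads both channel counters, snapshots again, and requires the channel sum to land between the two snapshots. In this run 268800 + 274432 = 543232, which sits between the first snapshot (523523) and the second (578563), so both shell comparisons fail and the test passes. A condensed sketch of that check, assuming the same Malloc_STAT bdev; the jq add expression and variable names are illustrative, the actual script reads .channels[0] and .channels[1] separately:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  io1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  ch_sum=$($rpc bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
  io2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  if [ "$ch_sum" -ge "$io1" ] && [ "$ch_sum" -le "$io2" ]; then
      echo "channel sum $ch_sum lies between snapshots $io1 and $io2"
  fi

The -c flag to bdev_get_iostat is what produces the per-channel JSON seen above, one entry per reactor thread (thread_id 2 and 3 in this run, matching the 0x3 core mask).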
00:12:33.449 04:49:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:33.449 04:49:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:33.449 04:49:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:33.449 04:49:47 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:33.449 04:49:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:33.449 04:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 [2024-05-15 04:49:47.632489] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:33.449 [2024-05-15 04:49:47.632645] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.708 [2024-05-15 04:49:47.818863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.967 [2024-05-15 04:49:48.047343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.225 [2024-05-15 04:49:48.312446] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.161 04:49:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:35.161 04:49:49 -- common/autotest_common.sh@852 -- # return 0 00:12:35.161 04:49:49 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:12:35.161 Base_1 00:12:35.161 04:49:49 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:12:35.161 Base_2 00:12:35.161 04:49:49 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:12:35.419 [2024-05-15 04:49:49.503986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:35.419 [2024-05-15 04:49:49.505796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:35.419 [2024-05-15 04:49:49.505874] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027380 00:12:35.419 [2024-05-15 04:49:49.505890] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:35.419 [2024-05-15 04:49:49.506068] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:12:35.420 [2024-05-15 04:49:49.506369] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027380 00:12:35.420 [2024-05-15 04:49:49.506385] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000027380 00:12:35.420 [2024-05-15 04:49:49.506567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.420 04:49:49 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:12:35.683 [2024-05-15 04:49:49.651930] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:35.684 [2024-05-15 04:49:49.651956] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:35.684 true 00:12:35.684 04:49:49 -- bdev/bdev_raid.sh@314 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:35.684 04:49:49 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:12:35.684 [2024-05-15 04:49:49.803999] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.684 04:49:49 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:12:35.684 04:49:49 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:12:35.684 04:49:49 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:12:35.684 04:49:49 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:12:35.981 [2024-05-15 04:49:49.959905] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:35.981 [2024-05-15 04:49:49.959929] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:35.981 [2024-05-15 04:49:49.959970] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:12:35.981 [2024-05-15 04:49:49.960035] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:35.981 true 00:12:35.981 04:49:49 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:12:35.981 04:49:49 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:35.981 [2024-05-15 04:49:50.112035] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.981 04:49:50 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:12:35.981 04:49:50 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:12:35.981 04:49:50 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:12:35.981 04:49:50 -- bdev/bdev_raid.sh@332 -- # killprocess 46374 00:12:35.981 04:49:50 -- common/autotest_common.sh@926 -- # '[' -z 46374 ']' 00:12:35.981 04:49:50 -- common/autotest_common.sh@930 -- # kill -0 46374 00:12:35.981 04:49:50 -- common/autotest_common.sh@931 -- # uname 00:12:35.981 04:49:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:35.981 04:49:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46374 00:12:35.981 killing process with pid 46374 00:12:35.981 04:49:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:35.981 04:49:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:35.981 04:49:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46374' 00:12:35.981 04:49:50 -- common/autotest_common.sh@945 -- # kill 46374 00:12:35.981 04:49:50 -- common/autotest_common.sh@950 -- # wait 46374 00:12:35.981 [2024-05-15 04:49:50.156262] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.981 [2024-05-15 04:49:50.156352] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.981 [2024-05-15 04:49:50.156385] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.981 [2024-05-15 04:49:50.156394] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Raid, state offline 00:12:35.981 [2024-05-15 04:49:50.156847] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.368 04:49:51 -- bdev/bdev_raid.sh@334 -- # return 0 00:12:37.368 ************************************ 00:12:37.368 END TEST raid0_resize_test 00:12:37.368 00:12:37.368 real 0m4.134s 00:12:37.368 user 0m5.025s 00:12:37.368 sys 0m0.594s 00:12:37.368 04:49:51 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.368 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:37.368 ************************************ 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:37.627 04:49:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:37.627 04:49:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.627 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:37.627 ************************************ 00:12:37.627 START TEST raid_state_function_test 00:12:37.627 ************************************ 00:12:37.627 04:49:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:37.627 Process raid pid: 46471 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=46471 00:12:37.627 04:49:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 46471' 00:12:37.628 04:49:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 46471 /var/tmp/spdk-raid.sock 00:12:37.628 04:49:51 -- common/autotest_common.sh@819 -- # '[' -z 46471 ']' 00:12:37.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:37.628 04:49:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:37.628 04:49:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.628 04:49:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
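The raid0_resize_test that just completed exercises the rule that a raid0 bdev only grows once every base bdev has grown: with two 32 MiB null bdevs at 512 B blocks the array exposes 131072 blocks (64 MiB), and it stays there after the first resize because raid0 capacity is bounded by the smallest base. A minimal sketch of that flow against the raid socket, using the RPCs from the trace above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_null_create Base_1 32 512
  $rpc bdev_null_create Base_2 32 512
  $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $rpc bdev_null_resize Base_1 64      # array unchanged: still 131072 blocks (64 MiB)
  $rpc bdev_null_resize Base_2 64      # both bases are now 64 MiB, so the array doubles
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # expect 262144 (128 MiB)

This matches the trace: blkcnt=131072 / raid_size_mb=64 after the first resize, blkcnt=262144 / raid_size_mb=128 after the second.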
00:12:37.628 04:49:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.628 04:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:37.628 04:49:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:37.628 [2024-05-15 04:49:51.815184] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:37.628 [2024-05-15 04:49:51.815408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.886 [2024-05-15 04:49:51.986710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.145 [2024-05-15 04:49:52.225285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.404 [2024-05-15 04:49:52.485386] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.341 04:49:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:39.341 04:49:53 -- common/autotest_common.sh@852 -- # return 0 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:39.341 [2024-05-15 04:49:53.463957] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.341 [2024-05-15 04:49:53.464024] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.341 [2024-05-15 04:49:53.464035] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.341 [2024-05-15 04:49:53.464053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.341 04:49:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.600 04:49:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:39.600 "name": "Existed_Raid", 00:12:39.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.600 "strip_size_kb": 64, 00:12:39.600 "state": "configuring", 00:12:39.600 "raid_level": "raid0", 00:12:39.600 "superblock": false, 00:12:39.600 "num_base_bdevs": 2, 00:12:39.600 "num_base_bdevs_discovered": 0, 00:12:39.600 "num_base_bdevs_operational": 2, 00:12:39.600 "base_bdevs_list": [ 00:12:39.600 { 00:12:39.600 "name": "BaseBdev1", 00:12:39.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.600 "is_configured": false, 00:12:39.600 
"data_offset": 0, 00:12:39.600 "data_size": 0 00:12:39.600 }, 00:12:39.600 { 00:12:39.600 "name": "BaseBdev2", 00:12:39.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.600 "is_configured": false, 00:12:39.600 "data_offset": 0, 00:12:39.600 "data_size": 0 00:12:39.600 } 00:12:39.600 ] 00:12:39.600 }' 00:12:39.600 04:49:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:39.600 04:49:53 -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 04:49:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:40.167 [2024-05-15 04:49:54.288038] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.167 [2024-05-15 04:49:54.288078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:40.167 04:49:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:40.426 [2024-05-15 04:49:54.432042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.426 [2024-05-15 04:49:54.432118] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.426 [2024-05-15 04:49:54.432130] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.426 [2024-05-15 04:49:54.432155] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.426 04:49:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.426 BaseBdev1 00:12:40.426 [2024-05-15 04:49:54.617825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.426 04:49:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:40.426 04:49:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:40.426 04:49:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:40.426 04:49:54 -- common/autotest_common.sh@889 -- # local i 00:12:40.426 04:49:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:40.426 04:49:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:40.426 04:49:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.686 04:49:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.686 [ 00:12:40.686 { 00:12:40.686 "name": "BaseBdev1", 00:12:40.686 "aliases": [ 00:12:40.686 "8cf7dc5d-73ec-430f-82c4-107965548d03" 00:12:40.686 ], 00:12:40.686 "product_name": "Malloc disk", 00:12:40.686 "block_size": 512, 00:12:40.686 "num_blocks": 65536, 00:12:40.686 "uuid": "8cf7dc5d-73ec-430f-82c4-107965548d03", 00:12:40.686 "assigned_rate_limits": { 00:12:40.686 "rw_ios_per_sec": 0, 00:12:40.686 "rw_mbytes_per_sec": 0, 00:12:40.686 "r_mbytes_per_sec": 0, 00:12:40.686 "w_mbytes_per_sec": 0 00:12:40.686 }, 00:12:40.686 "claimed": true, 00:12:40.686 "claim_type": "exclusive_write", 00:12:40.686 "zoned": false, 00:12:40.686 "supported_io_types": { 00:12:40.686 "read": true, 00:12:40.686 "write": true, 00:12:40.686 "unmap": true, 00:12:40.686 "write_zeroes": true, 00:12:40.686 "flush": true, 00:12:40.686 "reset": true, 00:12:40.686 "compare": 
false, 00:12:40.686 "compare_and_write": false, 00:12:40.686 "abort": true, 00:12:40.686 "nvme_admin": false, 00:12:40.686 "nvme_io": false 00:12:40.686 }, 00:12:40.686 "memory_domains": [ 00:12:40.686 { 00:12:40.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.686 "dma_device_type": 2 00:12:40.686 } 00:12:40.686 ], 00:12:40.686 "driver_specific": {} 00:12:40.686 } 00:12:40.686 ] 00:12:40.686 04:49:54 -- common/autotest_common.sh@895 -- # return 0 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.686 04:49:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.946 04:49:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:40.946 "name": "Existed_Raid", 00:12:40.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.946 "strip_size_kb": 64, 00:12:40.946 "state": "configuring", 00:12:40.946 "raid_level": "raid0", 00:12:40.946 "superblock": false, 00:12:40.946 "num_base_bdevs": 2, 00:12:40.946 "num_base_bdevs_discovered": 1, 00:12:40.946 "num_base_bdevs_operational": 2, 00:12:40.946 "base_bdevs_list": [ 00:12:40.946 { 00:12:40.946 "name": "BaseBdev1", 00:12:40.946 "uuid": "8cf7dc5d-73ec-430f-82c4-107965548d03", 00:12:40.946 "is_configured": true, 00:12:40.946 "data_offset": 0, 00:12:40.946 "data_size": 65536 00:12:40.946 }, 00:12:40.946 { 00:12:40.946 "name": "BaseBdev2", 00:12:40.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.946 "is_configured": false, 00:12:40.946 "data_offset": 0, 00:12:40.946 "data_size": 0 00:12:40.946 } 00:12:40.946 ] 00:12:40.946 }' 00:12:40.946 04:49:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:40.946 04:49:55 -- common/autotest_common.sh@10 -- # set +x 00:12:41.514 04:49:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:41.773 [2024-05-15 04:49:55.901960] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:41.773 [2024-05-15 04:49:55.902016] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:12:41.773 04:49:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:41.773 04:49:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:42.033 [2024-05-15 04:49:56.050051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.033 [2024-05-15 04:49:56.051465] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.033 
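The verify_raid_bdev_state checks running here rely on a raid created over bdevs that do not exist yet: Existed_Raid stays in the "configuring" state, with num_base_bdevs_discovered climbing as each base is created and claimed, and only flips to "online" once both bases are present. A sketch of the inspection step, assuming the same socket; the info/state/found variables and the here-string are illustrative:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r '.state' <<< "$info")
  found=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  [ "$state" = configuring ] && [ "$found" -eq 1 ] && echo "raid is waiting for BaseBdev2"

The same jq select expression appears in the trace above; the JSON it returns is what the test compares against the expected state, raid level, strip size, and base bdev counts.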
[2024-05-15 04:49:56.051518] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.033 04:49:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.291 04:49:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.291 "name": "Existed_Raid", 00:12:42.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.291 "strip_size_kb": 64, 00:12:42.291 "state": "configuring", 00:12:42.291 "raid_level": "raid0", 00:12:42.291 "superblock": false, 00:12:42.291 "num_base_bdevs": 2, 00:12:42.291 "num_base_bdevs_discovered": 1, 00:12:42.291 "num_base_bdevs_operational": 2, 00:12:42.291 "base_bdevs_list": [ 00:12:42.291 { 00:12:42.291 "name": "BaseBdev1", 00:12:42.291 "uuid": "8cf7dc5d-73ec-430f-82c4-107965548d03", 00:12:42.291 "is_configured": true, 00:12:42.291 "data_offset": 0, 00:12:42.291 "data_size": 65536 00:12:42.291 }, 00:12:42.291 { 00:12:42.291 "name": "BaseBdev2", 00:12:42.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.291 "is_configured": false, 00:12:42.291 "data_offset": 0, 00:12:42.291 "data_size": 0 00:12:42.291 } 00:12:42.291 ] 00:12:42.291 }' 00:12:42.291 04:49:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.291 04:49:56 -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 04:49:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:42.857 [2024-05-15 04:49:57.060897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.857 [2024-05-15 04:49:57.060944] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027f80 00:12:42.857 [2024-05-15 04:49:57.060952] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:42.857 [2024-05-15 04:49:57.061050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:12:42.857 [2024-05-15 04:49:57.061272] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027f80 00:12:42.857 [2024-05-15 04:49:57.061281] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027f80 00:12:42.857 [2024-05-15 04:49:57.061497] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.857 BaseBdev2 00:12:42.857 04:49:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 
00:12:42.857 04:49:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:42.857 04:49:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:42.857 04:49:57 -- common/autotest_common.sh@889 -- # local i 00:12:42.857 04:49:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:42.857 04:49:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:42.857 04:49:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.115 04:49:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.373 [ 00:12:43.373 { 00:12:43.373 "name": "BaseBdev2", 00:12:43.373 "aliases": [ 00:12:43.373 "b747aa20-24b4-4fc5-80df-8b58dce872e3" 00:12:43.373 ], 00:12:43.373 "product_name": "Malloc disk", 00:12:43.373 "block_size": 512, 00:12:43.373 "num_blocks": 65536, 00:12:43.373 "uuid": "b747aa20-24b4-4fc5-80df-8b58dce872e3", 00:12:43.373 "assigned_rate_limits": { 00:12:43.373 "rw_ios_per_sec": 0, 00:12:43.373 "rw_mbytes_per_sec": 0, 00:12:43.373 "r_mbytes_per_sec": 0, 00:12:43.373 "w_mbytes_per_sec": 0 00:12:43.373 }, 00:12:43.373 "claimed": true, 00:12:43.373 "claim_type": "exclusive_write", 00:12:43.373 "zoned": false, 00:12:43.373 "supported_io_types": { 00:12:43.373 "read": true, 00:12:43.373 "write": true, 00:12:43.373 "unmap": true, 00:12:43.373 "write_zeroes": true, 00:12:43.373 "flush": true, 00:12:43.373 "reset": true, 00:12:43.373 "compare": false, 00:12:43.373 "compare_and_write": false, 00:12:43.373 "abort": true, 00:12:43.373 "nvme_admin": false, 00:12:43.373 "nvme_io": false 00:12:43.373 }, 00:12:43.373 "memory_domains": [ 00:12:43.373 { 00:12:43.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.373 "dma_device_type": 2 00:12:43.373 } 00:12:43.373 ], 00:12:43.373 "driver_specific": {} 00:12:43.373 } 00:12:43.373 ] 00:12:43.373 04:49:57 -- common/autotest_common.sh@895 -- # return 0 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.373 04:49:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.631 04:49:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:43.631 "name": "Existed_Raid", 00:12:43.631 "uuid": "c11f2185-295b-45b5-a16f-a5212c1e2b08", 00:12:43.631 "strip_size_kb": 64, 00:12:43.631 "state": "online", 00:12:43.631 "raid_level": "raid0", 00:12:43.631 "superblock": false, 00:12:43.631 "num_base_bdevs": 2, 
00:12:43.631 "num_base_bdevs_discovered": 2, 00:12:43.631 "num_base_bdevs_operational": 2, 00:12:43.631 "base_bdevs_list": [ 00:12:43.631 { 00:12:43.631 "name": "BaseBdev1", 00:12:43.631 "uuid": "8cf7dc5d-73ec-430f-82c4-107965548d03", 00:12:43.631 "is_configured": true, 00:12:43.631 "data_offset": 0, 00:12:43.631 "data_size": 65536 00:12:43.631 }, 00:12:43.631 { 00:12:43.631 "name": "BaseBdev2", 00:12:43.631 "uuid": "b747aa20-24b4-4fc5-80df-8b58dce872e3", 00:12:43.631 "is_configured": true, 00:12:43.631 "data_offset": 0, 00:12:43.631 "data_size": 65536 00:12:43.631 } 00:12:43.631 ] 00:12:43.631 }' 00:12:43.631 04:49:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:43.631 04:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:44.195 04:49:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:44.195 [2024-05-15 04:49:58.425092] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.195 [2024-05-15 04:49:58.425123] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.195 [2024-05-15 04:49:58.425188] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.453 04:49:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.711 04:49:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:44.711 "name": "Existed_Raid", 00:12:44.711 "uuid": "c11f2185-295b-45b5-a16f-a5212c1e2b08", 00:12:44.711 "strip_size_kb": 64, 00:12:44.711 "state": "offline", 00:12:44.711 "raid_level": "raid0", 00:12:44.711 "superblock": false, 00:12:44.711 "num_base_bdevs": 2, 00:12:44.711 "num_base_bdevs_discovered": 1, 00:12:44.711 "num_base_bdevs_operational": 1, 00:12:44.711 "base_bdevs_list": [ 00:12:44.711 { 00:12:44.711 "name": null, 00:12:44.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.711 "is_configured": false, 00:12:44.711 "data_offset": 0, 00:12:44.711 "data_size": 65536 00:12:44.711 }, 00:12:44.711 { 00:12:44.711 "name": "BaseBdev2", 00:12:44.711 "uuid": "b747aa20-24b4-4fc5-80df-8b58dce872e3", 00:12:44.711 "is_configured": true, 00:12:44.711 "data_offset": 0, 00:12:44.711 "data_size": 65536 00:12:44.711 } 00:12:44.711 
] 00:12:44.711 }' 00:12:44.711 04:49:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:44.711 04:49:58 -- common/autotest_common.sh@10 -- # set +x 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.277 04:49:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:45.535 [2024-05-15 04:49:59.656499] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.535 [2024-05-15 04:49:59.656557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027f80 name Existed_Raid, state offline 00:12:45.793 04:49:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:45.794 04:49:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:45.794 04:49:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.794 04:49:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:45.794 04:49:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:45.794 04:49:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:45.794 04:49:59 -- bdev/bdev_raid.sh@287 -- # killprocess 46471 00:12:45.794 04:49:59 -- common/autotest_common.sh@926 -- # '[' -z 46471 ']' 00:12:45.794 04:49:59 -- common/autotest_common.sh@930 -- # kill -0 46471 00:12:45.794 04:49:59 -- common/autotest_common.sh@931 -- # uname 00:12:45.794 04:49:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:45.794 04:49:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46471 00:12:45.794 killing process with pid 46471 00:12:45.794 04:50:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:45.794 04:50:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:45.794 04:50:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46471' 00:12:45.794 04:50:00 -- common/autotest_common.sh@945 -- # kill 46471 00:12:45.794 [2024-05-15 04:50:00.007961] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.794 04:50:00 -- common/autotest_common.sh@950 -- # wait 46471 00:12:45.794 [2024-05-15 04:50:00.008101] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.694 ************************************ 00:12:47.694 END TEST raid_state_function_test 00:12:47.694 ************************************ 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:47.694 00:12:47.694 real 0m9.787s 00:12:47.694 user 0m15.902s 00:12:47.694 sys 0m1.245s 00:12:47.694 04:50:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.694 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:47.694 04:50:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:47.694 04:50:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:47.694 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 ************************************ 
00:12:47.694 START TEST raid_state_function_test_sb 00:12:47.694 ************************************ 00:12:47.694 04:50:01 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:47.694 Process raid pid: 46790 00:12:47.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=46790 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 46790' 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 46790 /var/tmp/spdk-raid.sock 00:12:47.694 04:50:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:47.694 04:50:01 -- common/autotest_common.sh@819 -- # '[' -z 46790 ']' 00:12:47.694 04:50:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:47.694 04:50:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:47.694 04:50:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:47.694 04:50:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:47.694 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:12:47.694 [2024-05-15 04:50:01.652342] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
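Each run_test invocation above boots a private bdev_svc app for the test: -r points its RPC server at a dedicated UNIX socket, -i 0 pins the shared-memory instance id, and -L bdev_raid enables the DEBUG-level bdev_raid.c traces that fill the rest of this log. A minimal hand-run equivalent, with a crude polling loop standing in for the harness's waitforlisten helper (the loop below is an illustrative assumption, not the harness code):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # poll until the RPC socket answers a harmless query, then RPCs can be issued
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs >/dev/null 2>&1; do sleep 0.1; done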
00:12:47.694 [2024-05-15 04:50:01.652565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.694 [2024-05-15 04:50:01.839244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.952 [2024-05-15 04:50:02.127243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.210 [2024-05-15 04:50:02.398946] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.150 04:50:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:49.150 04:50:03 -- common/autotest_common.sh@852 -- # return 0 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:49.150 [2024-05-15 04:50:03.228887] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.150 [2024-05-15 04:50:03.228976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.150 [2024-05-15 04:50:03.228993] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.150 [2024-05-15 04:50:03.229020] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.150 04:50:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.411 04:50:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:49.411 "name": "Existed_Raid", 00:12:49.411 "uuid": "ff3ebc02-4499-4922-8d8e-3097c13db3e9", 00:12:49.411 "strip_size_kb": 64, 00:12:49.411 "state": "configuring", 00:12:49.411 "raid_level": "raid0", 00:12:49.411 "superblock": true, 00:12:49.411 "num_base_bdevs": 2, 00:12:49.411 "num_base_bdevs_discovered": 0, 00:12:49.411 "num_base_bdevs_operational": 2, 00:12:49.411 "base_bdevs_list": [ 00:12:49.411 { 00:12:49.411 "name": "BaseBdev1", 00:12:49.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.411 "is_configured": false, 00:12:49.411 "data_offset": 0, 00:12:49.411 "data_size": 0 00:12:49.411 }, 00:12:49.411 { 00:12:49.411 "name": "BaseBdev2", 00:12:49.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.411 "is_configured": false, 00:12:49.411 "data_offset": 0, 00:12:49.411 "data_size": 0 00:12:49.411 } 00:12:49.411 ] 00:12:49.411 }' 00:12:49.411 04:50:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:49.411 04:50:03 -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.667 04:50:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:49.924 [2024-05-15 04:50:03.992678] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.924 [2024-05-15 04:50:03.992955] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:49.924 04:50:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:49.924 [2024-05-15 04:50:04.136800] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.924 [2024-05-15 04:50:04.136871] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.924 [2024-05-15 04:50:04.136882] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.924 [2024-05-15 04:50:04.136907] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.924 04:50:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.180 [2024-05-15 04:50:04.358571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.181 BaseBdev1 00:12:50.181 04:50:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:50.181 04:50:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:50.181 04:50:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:50.181 04:50:04 -- common/autotest_common.sh@889 -- # local i 00:12:50.181 04:50:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:50.181 04:50:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:50.181 04:50:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.438 04:50:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.438 [ 00:12:50.438 { 00:12:50.438 "name": "BaseBdev1", 00:12:50.438 "aliases": [ 00:12:50.438 "951694dc-d4ce-47b4-a3bc-0b9e5a023107" 00:12:50.438 ], 00:12:50.438 "product_name": "Malloc disk", 00:12:50.438 "block_size": 512, 00:12:50.438 "num_blocks": 65536, 00:12:50.438 "uuid": "951694dc-d4ce-47b4-a3bc-0b9e5a023107", 00:12:50.438 "assigned_rate_limits": { 00:12:50.438 "rw_ios_per_sec": 0, 00:12:50.438 "rw_mbytes_per_sec": 0, 00:12:50.438 "r_mbytes_per_sec": 0, 00:12:50.438 "w_mbytes_per_sec": 0 00:12:50.438 }, 00:12:50.438 "claimed": true, 00:12:50.438 "claim_type": "exclusive_write", 00:12:50.438 "zoned": false, 00:12:50.438 "supported_io_types": { 00:12:50.438 "read": true, 00:12:50.438 "write": true, 00:12:50.438 "unmap": true, 00:12:50.438 "write_zeroes": true, 00:12:50.438 "flush": true, 00:12:50.438 "reset": true, 00:12:50.438 "compare": false, 00:12:50.438 "compare_and_write": false, 00:12:50.438 "abort": true, 00:12:50.438 "nvme_admin": false, 00:12:50.438 "nvme_io": false 00:12:50.438 }, 00:12:50.438 "memory_domains": [ 00:12:50.438 { 00:12:50.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.438 "dma_device_type": 2 00:12:50.438 } 00:12:50.438 ], 00:12:50.438 "driver_specific": {} 00:12:50.438 } 00:12:50.438 ] 00:12:50.438 
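The waitforbdev/bdev_get_bdevs exchange above is the whole registration check: create the malloc disk, let examine finish, and read back the properties shown in the JSON dump (65536 blocks of 512 bytes, claimed exclusive_write by the raid module). A sketch of the same steps run by hand against this test's socket, using only the RPCs visible in the trace:

  # 32 MiB malloc bdev with 512-byte blocks, as the test creates it
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_wait_for_examine
  # pull out the fields this log asserts on
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b BaseBdev1 | jq '.[0] | {num_blocks, block_size, claimed}'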
04:50:04 -- common/autotest_common.sh@895 -- # return 0 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.438 04:50:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.695 04:50:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:50.695 "name": "Existed_Raid", 00:12:50.695 "uuid": "93bb3d91-363f-4322-864d-890e36f61038", 00:12:50.695 "strip_size_kb": 64, 00:12:50.695 "state": "configuring", 00:12:50.695 "raid_level": "raid0", 00:12:50.695 "superblock": true, 00:12:50.695 "num_base_bdevs": 2, 00:12:50.695 "num_base_bdevs_discovered": 1, 00:12:50.695 "num_base_bdevs_operational": 2, 00:12:50.695 "base_bdevs_list": [ 00:12:50.695 { 00:12:50.695 "name": "BaseBdev1", 00:12:50.695 "uuid": "951694dc-d4ce-47b4-a3bc-0b9e5a023107", 00:12:50.695 "is_configured": true, 00:12:50.695 "data_offset": 2048, 00:12:50.695 "data_size": 63488 00:12:50.695 }, 00:12:50.695 { 00:12:50.695 "name": "BaseBdev2", 00:12:50.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.695 "is_configured": false, 00:12:50.695 "data_offset": 0, 00:12:50.695 "data_size": 0 00:12:50.695 } 00:12:50.695 ] 00:12:50.695 }' 00:12:50.695 04:50:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:50.695 04:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:51.304 04:50:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:51.590 [2024-05-15 04:50:05.646738] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.590 [2024-05-15 04:50:05.646786] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:12:51.590 04:50:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:51.590 04:50:05 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:51.848 04:50:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.848 BaseBdev1 00:12:51.848 04:50:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:51.848 04:50:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:51.848 04:50:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:51.848 04:50:06 -- common/autotest_common.sh@889 -- # local i 00:12:51.848 04:50:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:51.848 04:50:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:51.848 04:50:06 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.107 04:50:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.365 [ 00:12:52.365 { 00:12:52.365 "name": "BaseBdev1", 00:12:52.365 "aliases": [ 00:12:52.365 "4fd57aa4-33d4-4f22-b277-90f09fcba84c" 00:12:52.365 ], 00:12:52.365 "product_name": "Malloc disk", 00:12:52.365 "block_size": 512, 00:12:52.365 "num_blocks": 65536, 00:12:52.365 "uuid": "4fd57aa4-33d4-4f22-b277-90f09fcba84c", 00:12:52.365 "assigned_rate_limits": { 00:12:52.365 "rw_ios_per_sec": 0, 00:12:52.365 "rw_mbytes_per_sec": 0, 00:12:52.365 "r_mbytes_per_sec": 0, 00:12:52.365 "w_mbytes_per_sec": 0 00:12:52.365 }, 00:12:52.365 "claimed": false, 00:12:52.365 "zoned": false, 00:12:52.365 "supported_io_types": { 00:12:52.365 "read": true, 00:12:52.365 "write": true, 00:12:52.365 "unmap": true, 00:12:52.365 "write_zeroes": true, 00:12:52.365 "flush": true, 00:12:52.365 "reset": true, 00:12:52.365 "compare": false, 00:12:52.365 "compare_and_write": false, 00:12:52.365 "abort": true, 00:12:52.365 "nvme_admin": false, 00:12:52.365 "nvme_io": false 00:12:52.365 }, 00:12:52.365 "memory_domains": [ 00:12:52.365 { 00:12:52.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.365 "dma_device_type": 2 00:12:52.365 } 00:12:52.365 ], 00:12:52.365 "driver_specific": {} 00:12:52.365 } 00:12:52.365 ] 00:12:52.365 04:50:06 -- common/autotest_common.sh@895 -- # return 0 00:12:52.365 04:50:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:52.623 [2024-05-15 04:50:06.621803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.623 [2024-05-15 04:50:06.623958] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.623 [2024-05-15 04:50:06.624039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.623 04:50:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.881 04:50:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:52.881 "name": "Existed_Raid", 00:12:52.881 "uuid": "83345bb2-fdaf-4fc1-b9ae-1aacdaf686f7", 00:12:52.881 "strip_size_kb": 64, 00:12:52.881 "state": 
"configuring", 00:12:52.881 "raid_level": "raid0", 00:12:52.881 "superblock": true, 00:12:52.881 "num_base_bdevs": 2, 00:12:52.881 "num_base_bdevs_discovered": 1, 00:12:52.881 "num_base_bdevs_operational": 2, 00:12:52.881 "base_bdevs_list": [ 00:12:52.881 { 00:12:52.881 "name": "BaseBdev1", 00:12:52.881 "uuid": "4fd57aa4-33d4-4f22-b277-90f09fcba84c", 00:12:52.881 "is_configured": true, 00:12:52.881 "data_offset": 2048, 00:12:52.881 "data_size": 63488 00:12:52.881 }, 00:12:52.881 { 00:12:52.881 "name": "BaseBdev2", 00:12:52.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.881 "is_configured": false, 00:12:52.881 "data_offset": 0, 00:12:52.881 "data_size": 0 00:12:52.881 } 00:12:52.881 ] 00:12:52.881 }' 00:12:52.881 04:50:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:52.881 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:12:53.447 04:50:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:53.447 BaseBdev2 00:12:53.447 [2024-05-15 04:50:07.677864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.447 [2024-05-15 04:50:07.678051] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:12:53.447 [2024-05-15 04:50:07.678064] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:53.447 [2024-05-15 04:50:07.678146] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:12:53.447 [2024-05-15 04:50:07.678341] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:12:53.448 [2024-05-15 04:50:07.678351] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:12:53.448 [2024-05-15 04:50:07.678455] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.706 04:50:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:53.706 04:50:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:53.706 04:50:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:53.706 04:50:07 -- common/autotest_common.sh@889 -- # local i 00:12:53.706 04:50:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:53.706 04:50:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:53.706 04:50:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:53.706 04:50:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:53.965 [ 00:12:53.965 { 00:12:53.965 "name": "BaseBdev2", 00:12:53.965 "aliases": [ 00:12:53.965 "50912a8a-c17a-4145-a30b-cdf45899b2c3" 00:12:53.965 ], 00:12:53.965 "product_name": "Malloc disk", 00:12:53.965 "block_size": 512, 00:12:53.965 "num_blocks": 65536, 00:12:53.965 "uuid": "50912a8a-c17a-4145-a30b-cdf45899b2c3", 00:12:53.965 "assigned_rate_limits": { 00:12:53.965 "rw_ios_per_sec": 0, 00:12:53.965 "rw_mbytes_per_sec": 0, 00:12:53.965 "r_mbytes_per_sec": 0, 00:12:53.965 "w_mbytes_per_sec": 0 00:12:53.965 }, 00:12:53.965 "claimed": true, 00:12:53.965 "claim_type": "exclusive_write", 00:12:53.965 "zoned": false, 00:12:53.965 "supported_io_types": { 00:12:53.965 "read": true, 00:12:53.965 "write": true, 00:12:53.965 "unmap": true, 00:12:53.965 "write_zeroes": true, 00:12:53.965 "flush": true, 00:12:53.965 
"reset": true, 00:12:53.965 "compare": false, 00:12:53.965 "compare_and_write": false, 00:12:53.965 "abort": true, 00:12:53.965 "nvme_admin": false, 00:12:53.965 "nvme_io": false 00:12:53.965 }, 00:12:53.965 "memory_domains": [ 00:12:53.965 { 00:12:53.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.965 "dma_device_type": 2 00:12:53.965 } 00:12:53.965 ], 00:12:53.965 "driver_specific": {} 00:12:53.965 } 00:12:53.965 ] 00:12:53.965 04:50:08 -- common/autotest_common.sh@895 -- # return 0 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.965 04:50:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.223 04:50:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:54.223 "name": "Existed_Raid", 00:12:54.223 "uuid": "83345bb2-fdaf-4fc1-b9ae-1aacdaf686f7", 00:12:54.223 "strip_size_kb": 64, 00:12:54.223 "state": "online", 00:12:54.223 "raid_level": "raid0", 00:12:54.223 "superblock": true, 00:12:54.223 "num_base_bdevs": 2, 00:12:54.223 "num_base_bdevs_discovered": 2, 00:12:54.223 "num_base_bdevs_operational": 2, 00:12:54.223 "base_bdevs_list": [ 00:12:54.223 { 00:12:54.223 "name": "BaseBdev1", 00:12:54.223 "uuid": "4fd57aa4-33d4-4f22-b277-90f09fcba84c", 00:12:54.223 "is_configured": true, 00:12:54.223 "data_offset": 2048, 00:12:54.223 "data_size": 63488 00:12:54.223 }, 00:12:54.223 { 00:12:54.223 "name": "BaseBdev2", 00:12:54.223 "uuid": "50912a8a-c17a-4145-a30b-cdf45899b2c3", 00:12:54.223 "is_configured": true, 00:12:54.223 "data_offset": 2048, 00:12:54.223 "data_size": 63488 00:12:54.223 } 00:12:54.223 ] 00:12:54.223 }' 00:12:54.223 04:50:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:54.223 04:50:08 -- common/autotest_common.sh@10 -- # set +x 00:12:54.789 04:50:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:55.047 [2024-05-15 04:50:09.046097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.047 [2024-05-15 04:50:09.046127] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.047 [2024-05-15 04:50:09.046170] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:55.047 
04:50:09 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.047 04:50:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.305 04:50:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:55.305 "name": "Existed_Raid", 00:12:55.305 "uuid": "83345bb2-fdaf-4fc1-b9ae-1aacdaf686f7", 00:12:55.305 "strip_size_kb": 64, 00:12:55.305 "state": "offline", 00:12:55.305 "raid_level": "raid0", 00:12:55.305 "superblock": true, 00:12:55.305 "num_base_bdevs": 2, 00:12:55.305 "num_base_bdevs_discovered": 1, 00:12:55.305 "num_base_bdevs_operational": 1, 00:12:55.305 "base_bdevs_list": [ 00:12:55.305 { 00:12:55.305 "name": null, 00:12:55.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.305 "is_configured": false, 00:12:55.305 "data_offset": 2048, 00:12:55.305 "data_size": 63488 00:12:55.305 }, 00:12:55.305 { 00:12:55.305 "name": "BaseBdev2", 00:12:55.305 "uuid": "50912a8a-c17a-4145-a30b-cdf45899b2c3", 00:12:55.305 "is_configured": true, 00:12:55.305 "data_offset": 2048, 00:12:55.305 "data_size": 63488 00:12:55.305 } 00:12:55.305 ] 00:12:55.305 }' 00:12:55.305 04:50:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:55.305 04:50:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.870 04:50:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:56.129 [2024-05-15 04:50:10.168031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.129 [2024-05-15 04:50:10.168092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:12:56.129 04:50:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:56.129 04:50:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:56.129 04:50:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:56.129 04:50:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.387 04:50:10 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:12:56.387 04:50:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:56.387 04:50:10 -- bdev/bdev_raid.sh@287 -- # killprocess 46790 00:12:56.387 04:50:10 -- common/autotest_common.sh@926 -- # '[' -z 46790 ']' 00:12:56.387 04:50:10 -- common/autotest_common.sh@930 -- # kill -0 46790 00:12:56.387 04:50:10 -- common/autotest_common.sh@931 -- # uname 00:12:56.387 04:50:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:56.387 04:50:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46790 00:12:56.387 killing process with pid 46790 00:12:56.387 04:50:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:56.387 04:50:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:56.387 04:50:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46790' 00:12:56.387 04:50:10 -- common/autotest_common.sh@945 -- # kill 46790 00:12:56.387 04:50:10 -- common/autotest_common.sh@950 -- # wait 46790 00:12:56.387 [2024-05-15 04:50:10.528936] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.387 [2024-05-15 04:50:10.529045] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.760 ************************************ 00:12:57.760 END TEST raid_state_function_test_sb 00:12:57.760 ************************************ 00:12:57.760 04:50:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:57.760 00:12:57.760 real 0m10.454s 00:12:57.760 user 0m16.956s 00:12:57.760 sys 0m1.379s 00:12:57.760 04:50:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.760 04:50:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.018 04:50:11 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:58.018 04:50:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:58.018 04:50:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.018 04:50:11 -- common/autotest_common.sh@10 -- # set +x 00:12:58.018 ************************************ 00:12:58.018 START TEST raid_superblock_test 00:12:58.018 ************************************ 00:12:58.018 04:50:12 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@357 -- # raid_pid=47118 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@358 -- # waitforlisten 47118 
/var/tmp/spdk-raid.sock 00:12:58.018 04:50:12 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:58.018 04:50:12 -- common/autotest_common.sh@819 -- # '[' -z 47118 ']' 00:12:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:58.018 04:50:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:58.018 04:50:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:58.018 04:50:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:58.018 04:50:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:58.018 04:50:12 -- common/autotest_common.sh@10 -- # set +x 00:12:58.018 [2024-05-15 04:50:12.161051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:58.018 [2024-05-15 04:50:12.161283] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47118 ] 00:12:58.276 [2024-05-15 04:50:12.330524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.534 [2024-05-15 04:50:12.572268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.791 [2024-05-15 04:50:12.834809] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.724 04:50:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:59.724 04:50:13 -- common/autotest_common.sh@852 -- # return 0 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:59.724 malloc1 00:12:59.724 04:50:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:59.982 [2024-05-15 04:50:14.021340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:59.982 [2024-05-15 04:50:14.021407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.982 [2024-05-15 04:50:14.021454] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:12:59.982 [2024-05-15 04:50:14.021487] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.982 [2024-05-15 04:50:14.023399] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.982 [2024-05-15 04:50:14.023440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:59.982 pt1 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
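The loop above builds each leg of the array as malloc -> passthru, pinning the passthru's UUID so the raid superblock written later identifies a bdev the test controls; pt2 is produced by the next iteration just below. A sketch of the pair of RPCs per leg, runnable standalone:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b malloc2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002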
00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.982 04:50:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:59.982 malloc2 00:13:00.241 04:50:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.241 [2024-05-15 04:50:14.340810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.241 [2024-05-15 04:50:14.340874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.241 [2024-05-15 04:50:14.340919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:13:00.241 [2024-05-15 04:50:14.340956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.241 [2024-05-15 04:50:14.342461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.241 [2024-05-15 04:50:14.342495] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.241 pt2 00:13:00.241 04:50:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:00.241 04:50:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:00.241 04:50:14 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:00.498 [2024-05-15 04:50:14.532942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.498 [2024-05-15 04:50:14.534353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.498 [2024-05-15 04:50:14.534473] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002a380 00:13:00.498 [2024-05-15 04:50:14.534484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:00.498 [2024-05-15 04:50:14.534593] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:00.498 [2024-05-15 04:50:14.534845] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002a380 00:13:00.498 [2024-05-15 04:50:14.534856] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002a380 00:13:00.498 [2024-05-15 04:50:14.534956] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
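verify_raid_bdev_state, whose locals are being set up here, reduces to one RPC plus a jq filter: fetch every raid bdev, select the one under test, and compare fields against the expected values. A hand-run equivalent of the check performed on raid_bdev1 at this point, with field names taken from the JSON dumps in this log:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") |
      "\(.state) \(.raid_level) \(.strip_size_kb) \(.num_base_bdevs_discovered)"'
  # expected at this step: online raid0 64 2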
00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.498 04:50:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.756 04:50:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:00.756 "name": "raid_bdev1", 00:13:00.756 "uuid": "895f1be3-0571-4dca-ab06-54fc20b838ea", 00:13:00.756 "strip_size_kb": 64, 00:13:00.756 "state": "online", 00:13:00.756 "raid_level": "raid0", 00:13:00.756 "superblock": true, 00:13:00.756 "num_base_bdevs": 2, 00:13:00.756 "num_base_bdevs_discovered": 2, 00:13:00.756 "num_base_bdevs_operational": 2, 00:13:00.756 "base_bdevs_list": [ 00:13:00.756 { 00:13:00.756 "name": "pt1", 00:13:00.756 "uuid": "2c9ecd21-1470-5030-b1a1-22c889c9d335", 00:13:00.756 "is_configured": true, 00:13:00.756 "data_offset": 2048, 00:13:00.756 "data_size": 63488 00:13:00.756 }, 00:13:00.756 { 00:13:00.756 "name": "pt2", 00:13:00.756 "uuid": "269b59a6-9cf0-5d8c-9b7d-31644601ed28", 00:13:00.756 "is_configured": true, 00:13:00.756 "data_offset": 2048, 00:13:00.756 "data_size": 63488 00:13:00.756 } 00:13:00.756 ] 00:13:00.756 }' 00:13:00.756 04:50:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:00.756 04:50:14 -- common/autotest_common.sh@10 -- # set +x 00:13:01.319 04:50:15 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:01.319 04:50:15 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:01.319 [2024-05-15 04:50:15.501066] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.319 04:50:15 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=895f1be3-0571-4dca-ab06-54fc20b838ea 00:13:01.319 04:50:15 -- bdev/bdev_raid.sh@380 -- # '[' -z 895f1be3-0571-4dca-ab06-54fc20b838ea ']' 00:13:01.319 04:50:15 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:01.577 [2024-05-15 04:50:15.724986] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.577 [2024-05-15 04:50:15.725011] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.577 [2024-05-15 04:50:15.725075] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.577 [2024-05-15 04:50:15.725108] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.577 [2024-05-15 04:50:15.725117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a380 name raid_bdev1, state offline 00:13:01.577 04:50:15 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.577 04:50:15 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:01.835 04:50:15 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:01.835 04:50:15 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:01.835 04:50:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:01.835 04:50:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
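Teardown then unwinds the stack top-down: bdev_raid_delete takes raid_bdev1 offline and frees it, and each passthru is removed in turn (pt1 above, pt2 just below). The malloc disks deliberately stay in place — the raid superblock written through the passthrus persists on them and drives the "File exists" assertion that follows. As a standalone sequence:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_delete raid_bdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_delete pt1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_delete pt2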
00:13:02.093 04:50:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.093 04:50:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:02.093 04:50:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:02.093 04:50:16 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:02.351 04:50:16 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:02.351 04:50:16 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:02.351 04:50:16 -- common/autotest_common.sh@640 -- # local es=0 00:13:02.351 04:50:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:02.351 04:50:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.351 04:50:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:02.351 04:50:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.351 04:50:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:02.351 04:50:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.351 04:50:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:02.351 04:50:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.351 04:50:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:02.351 04:50:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:02.351 [2024-05-15 04:50:16.525037] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:02.351 [2024-05-15 04:50:16.526233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:02.351 [2024-05-15 04:50:16.526278] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:02.351 [2024-05-15 04:50:16.526335] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:02.351 [2024-05-15 04:50:16.526361] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.351 [2024-05-15 04:50:16.526371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a980 name raid_bdev1, state configuring 00:13:02.351 request: 00:13:02.351 { 00:13:02.351 "name": "raid_bdev1", 00:13:02.351 "raid_level": "raid0", 00:13:02.351 "base_bdevs": [ 00:13:02.351 "malloc1", 00:13:02.351 "malloc2" 00:13:02.351 ], 00:13:02.351 "superblock": false, 00:13:02.351 "strip_size_kb": 64, 00:13:02.351 "method": "bdev_raid_create", 00:13:02.351 "req_id": 1 00:13:02.351 } 00:13:02.351 Got JSON-RPC error response 00:13:02.351 response: 00:13:02.351 { 00:13:02.351 "code": -17, 00:13:02.351 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:02.351 } 00:13:02.351 04:50:16 -- common/autotest_common.sh@643 -- # es=1 00:13:02.351 04:50:16 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:13:02.351 04:50:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:02.351 04:50:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:02.351 04:50:16 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:02.351 04:50:16 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.609 04:50:16 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:02.609 04:50:16 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:02.609 04:50:16 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:02.867 [2024-05-15 04:50:16.865067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:02.867 [2024-05-15 04:50:16.865177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.867 [2024-05-15 04:50:16.865219] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:13:02.867 [2024-05-15 04:50:16.865251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.867 [2024-05-15 04:50:16.866869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.867 [2024-05-15 04:50:16.866909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:02.867 [2024-05-15 04:50:16.867002] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:02.867 [2024-05-15 04:50:16.867055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:02.867 pt1 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.867 04:50:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.867 04:50:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:02.867 "name": "raid_bdev1", 00:13:02.867 "uuid": "895f1be3-0571-4dca-ab06-54fc20b838ea", 00:13:02.867 "strip_size_kb": 64, 00:13:02.867 "state": "configuring", 00:13:02.867 "raid_level": "raid0", 00:13:02.867 "superblock": true, 00:13:02.867 "num_base_bdevs": 2, 00:13:02.867 "num_base_bdevs_discovered": 1, 00:13:02.867 "num_base_bdevs_operational": 2, 00:13:02.867 "base_bdevs_list": [ 00:13:02.867 { 00:13:02.867 "name": "pt1", 00:13:02.867 "uuid": "2c9ecd21-1470-5030-b1a1-22c889c9d335", 00:13:02.867 "is_configured": true, 00:13:02.867 "data_offset": 2048, 00:13:02.867 "data_size": 63488 00:13:02.867 }, 00:13:02.867 { 00:13:02.867 "name": null, 00:13:02.867 "uuid": "269b59a6-9cf0-5d8c-9b7d-31644601ed28", 00:13:02.867 
"is_configured": false, 00:13:02.867 "data_offset": 2048, 00:13:02.867 "data_size": 63488 00:13:02.867 } 00:13:02.867 ] 00:13:02.867 }' 00:13:02.867 04:50:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:02.867 04:50:17 -- common/autotest_common.sh@10 -- # set +x 00:13:03.433 04:50:17 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:13:03.433 04:50:17 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:03.433 04:50:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:03.433 04:50:17 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:03.691 [2024-05-15 04:50:17.777167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:03.691 [2024-05-15 04:50:17.777265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.691 [2024-05-15 04:50:17.777317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d380 00:13:03.691 [2024-05-15 04:50:17.777342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.691 [2024-05-15 04:50:17.777699] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.691 [2024-05-15 04:50:17.777900] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:03.691 [2024-05-15 04:50:17.778002] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:03.691 [2024-05-15 04:50:17.778026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:03.691 [2024-05-15 04:50:17.778107] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002cd80 00:13:03.691 [2024-05-15 04:50:17.778115] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:03.691 [2024-05-15 04:50:17.778211] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:13:03.691 [2024-05-15 04:50:17.778407] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002cd80 00:13:03.691 [2024-05-15 04:50:17.778416] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002cd80 00:13:03.691 [2024-05-15 04:50:17.778496] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.691 pt2 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:03.691 04:50:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.691 04:50:17 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.949 04:50:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:03.949 "name": "raid_bdev1", 00:13:03.949 "uuid": "895f1be3-0571-4dca-ab06-54fc20b838ea", 00:13:03.949 "strip_size_kb": 64, 00:13:03.949 "state": "online", 00:13:03.949 "raid_level": "raid0", 00:13:03.949 "superblock": true, 00:13:03.949 "num_base_bdevs": 2, 00:13:03.949 "num_base_bdevs_discovered": 2, 00:13:03.949 "num_base_bdevs_operational": 2, 00:13:03.949 "base_bdevs_list": [ 00:13:03.949 { 00:13:03.949 "name": "pt1", 00:13:03.949 "uuid": "2c9ecd21-1470-5030-b1a1-22c889c9d335", 00:13:03.949 "is_configured": true, 00:13:03.949 "data_offset": 2048, 00:13:03.949 "data_size": 63488 00:13:03.949 }, 00:13:03.949 { 00:13:03.949 "name": "pt2", 00:13:03.949 "uuid": "269b59a6-9cf0-5d8c-9b7d-31644601ed28", 00:13:03.949 "is_configured": true, 00:13:03.949 "data_offset": 2048, 00:13:03.949 "data_size": 63488 00:13:03.949 } 00:13:03.949 ] 00:13:03.949 }' 00:13:03.949 04:50:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:03.949 04:50:18 -- common/autotest_common.sh@10 -- # set +x 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:04.515 [2024-05-15 04:50:18.609344] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@430 -- # '[' 895f1be3-0571-4dca-ab06-54fc20b838ea '!=' 895f1be3-0571-4dca-ab06-54fc20b838ea ']' 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:04.515 04:50:18 -- bdev/bdev_raid.sh@511 -- # killprocess 47118 00:13:04.515 04:50:18 -- common/autotest_common.sh@926 -- # '[' -z 47118 ']' 00:13:04.515 04:50:18 -- common/autotest_common.sh@930 -- # kill -0 47118 00:13:04.515 04:50:18 -- common/autotest_common.sh@931 -- # uname 00:13:04.515 04:50:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:04.515 04:50:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 47118 00:13:04.515 killing process with pid 47118 00:13:04.515 04:50:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:04.515 04:50:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:04.515 04:50:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47118' 00:13:04.515 04:50:18 -- common/autotest_common.sh@945 -- # kill 47118 00:13:04.515 04:50:18 -- common/autotest_common.sh@950 -- # wait 47118 00:13:04.515 [2024-05-15 04:50:18.653052] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.515 [2024-05-15 04:50:18.653112] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.515 [2024-05-15 04:50:18.653143] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.515 [2024-05-15 04:50:18.653152] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002cd80 name raid_bdev1, state offline 00:13:04.773 [2024-05-15 04:50:18.852558] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.147 ************************************ 00:13:06.147 END TEST raid_superblock_test 00:13:06.147 ************************************ 00:13:06.147 04:50:20 -- bdev/bdev_raid.sh@513 -- 
# return 0 00:13:06.147 00:13:06.147 real 0m8.263s 00:13:06.147 user 0m12.871s 00:13:06.147 sys 0m1.108s 00:13:06.148 04:50:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.148 04:50:20 -- common/autotest_common.sh@10 -- # set +x 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:06.148 04:50:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:06.148 04:50:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.148 04:50:20 -- common/autotest_common.sh@10 -- # set +x 00:13:06.148 ************************************ 00:13:06.148 START TEST raid_state_function_test 00:13:06.148 ************************************ 00:13:06.148 04:50:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:06.148 Process raid pid: 47372 00:13:06.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=47372 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47372' 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47372 /var/tmp/spdk-raid.sock 00:13:06.148 04:50:20 -- common/autotest_common.sh@819 -- # '[' -z 47372 ']' 00:13:06.148 04:50:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:06.148 04:50:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:06.148 04:50:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:06.148 04:50:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:06.148 04:50:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:06.148 04:50:20 -- common/autotest_common.sh@10 -- # set +x 00:13:06.406 [2024-05-15 04:50:20.476629] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:06.406 [2024-05-15 04:50:20.476983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.406 [2024-05-15 04:50:20.631268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.680 [2024-05-15 04:50:20.860558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.967 [2024-05-15 04:50:21.125879] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.902 04:50:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.902 04:50:21 -- common/autotest_common.sh@852 -- # return 0 00:13:07.902 04:50:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:08.161 [2024-05-15 04:50:22.159869] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:08.161 [2024-05-15 04:50:22.159937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:08.161 [2024-05-15 04:50:22.159948] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:08.161 [2024-05-15 04:50:22.159983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.161 04:50:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:08.161 "name": "Existed_Raid", 00:13:08.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.162 "strip_size_kb": 64, 00:13:08.162 "state": "configuring", 00:13:08.162 "raid_level": "concat", 00:13:08.162 "superblock": false, 00:13:08.162 "num_base_bdevs": 2, 00:13:08.162 "num_base_bdevs_discovered": 0, 00:13:08.162 "num_base_bdevs_operational": 2, 00:13:08.162 "base_bdevs_list": [ 00:13:08.162 { 00:13:08.162 "name": "BaseBdev1", 00:13:08.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.162 "is_configured": false, 
00:13:08.162 "data_offset": 0, 00:13:08.162 "data_size": 0 00:13:08.162 }, 00:13:08.162 { 00:13:08.162 "name": "BaseBdev2", 00:13:08.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.162 "is_configured": false, 00:13:08.162 "data_offset": 0, 00:13:08.162 "data_size": 0 00:13:08.162 } 00:13:08.162 ] 00:13:08.162 }' 00:13:08.162 04:50:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:08.162 04:50:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.729 04:50:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:08.988 [2024-05-15 04:50:23.155922] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.988 [2024-05-15 04:50:23.155959] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:08.988 04:50:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:09.246 [2024-05-15 04:50:23.303930] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.246 [2024-05-15 04:50:23.303998] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.246 [2024-05-15 04:50:23.304008] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.246 [2024-05-15 04:50:23.304032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.246 04:50:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.505 [2024-05-15 04:50:23.498205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.505 BaseBdev1 00:13:09.505 04:50:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:09.505 04:50:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:09.505 04:50:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:09.505 04:50:23 -- common/autotest_common.sh@889 -- # local i 00:13:09.505 04:50:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:09.505 04:50:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:09.505 04:50:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.505 04:50:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:09.764 [ 00:13:09.764 { 00:13:09.764 "name": "BaseBdev1", 00:13:09.764 "aliases": [ 00:13:09.764 "a6f5fe3e-6082-407b-9584-d426884b302a" 00:13:09.764 ], 00:13:09.764 "product_name": "Malloc disk", 00:13:09.764 "block_size": 512, 00:13:09.764 "num_blocks": 65536, 00:13:09.764 "uuid": "a6f5fe3e-6082-407b-9584-d426884b302a", 00:13:09.764 "assigned_rate_limits": { 00:13:09.764 "rw_ios_per_sec": 0, 00:13:09.764 "rw_mbytes_per_sec": 0, 00:13:09.764 "r_mbytes_per_sec": 0, 00:13:09.764 "w_mbytes_per_sec": 0 00:13:09.764 }, 00:13:09.764 "claimed": true, 00:13:09.764 "claim_type": "exclusive_write", 00:13:09.764 "zoned": false, 00:13:09.764 "supported_io_types": { 00:13:09.764 "read": true, 00:13:09.764 "write": true, 00:13:09.764 "unmap": true, 00:13:09.764 "write_zeroes": true, 00:13:09.764 "flush": true, 00:13:09.764 "reset": true, 00:13:09.764 
"compare": false, 00:13:09.764 "compare_and_write": false, 00:13:09.764 "abort": true, 00:13:09.764 "nvme_admin": false, 00:13:09.764 "nvme_io": false 00:13:09.764 }, 00:13:09.764 "memory_domains": [ 00:13:09.764 { 00:13:09.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.764 "dma_device_type": 2 00:13:09.764 } 00:13:09.764 ], 00:13:09.764 "driver_specific": {} 00:13:09.764 } 00:13:09.764 ] 00:13:09.764 04:50:23 -- common/autotest_common.sh@895 -- # return 0 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.764 04:50:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.023 04:50:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.023 "name": "Existed_Raid", 00:13:10.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.023 "strip_size_kb": 64, 00:13:10.023 "state": "configuring", 00:13:10.023 "raid_level": "concat", 00:13:10.023 "superblock": false, 00:13:10.023 "num_base_bdevs": 2, 00:13:10.023 "num_base_bdevs_discovered": 1, 00:13:10.023 "num_base_bdevs_operational": 2, 00:13:10.023 "base_bdevs_list": [ 00:13:10.023 { 00:13:10.023 "name": "BaseBdev1", 00:13:10.023 "uuid": "a6f5fe3e-6082-407b-9584-d426884b302a", 00:13:10.023 "is_configured": true, 00:13:10.023 "data_offset": 0, 00:13:10.023 "data_size": 65536 00:13:10.023 }, 00:13:10.023 { 00:13:10.023 "name": "BaseBdev2", 00:13:10.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.023 "is_configured": false, 00:13:10.023 "data_offset": 0, 00:13:10.023 "data_size": 0 00:13:10.023 } 00:13:10.023 ] 00:13:10.023 }' 00:13:10.023 04:50:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.023 04:50:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.590 04:50:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:10.590 [2024-05-15 04:50:24.786287] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.590 [2024-05-15 04:50:24.786328] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:13:10.590 04:50:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:10.590 04:50:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:10.849 [2024-05-15 04:50:24.926370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.849 [2024-05-15 04:50:24.927571] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:13:10.849 [2024-05-15 04:50:24.927624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.849 04:50:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.109 04:50:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:11.109 "name": "Existed_Raid", 00:13:11.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.109 "strip_size_kb": 64, 00:13:11.109 "state": "configuring", 00:13:11.109 "raid_level": "concat", 00:13:11.109 "superblock": false, 00:13:11.109 "num_base_bdevs": 2, 00:13:11.109 "num_base_bdevs_discovered": 1, 00:13:11.109 "num_base_bdevs_operational": 2, 00:13:11.109 "base_bdevs_list": [ 00:13:11.109 { 00:13:11.109 "name": "BaseBdev1", 00:13:11.109 "uuid": "a6f5fe3e-6082-407b-9584-d426884b302a", 00:13:11.109 "is_configured": true, 00:13:11.109 "data_offset": 0, 00:13:11.109 "data_size": 65536 00:13:11.109 }, 00:13:11.109 { 00:13:11.109 "name": "BaseBdev2", 00:13:11.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.109 "is_configured": false, 00:13:11.109 "data_offset": 0, 00:13:11.109 "data_size": 0 00:13:11.109 } 00:13:11.109 ] 00:13:11.109 }' 00:13:11.109 04:50:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:11.109 04:50:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 04:50:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:11.936 [2024-05-15 04:50:25.978636] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.936 [2024-05-15 04:50:25.978680] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027f80 00:13:11.936 [2024-05-15 04:50:25.978689] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:11.936 [2024-05-15 04:50:25.980174] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:11.936 BaseBdev2 00:13:11.936 [2024-05-15 04:50:25.981304] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027f80 00:13:11.936 [2024-05-15 04:50:25.981352] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027f80 00:13:11.936 [2024-05-15 04:50:25.981928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.936 04:50:25 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:13:11.936 04:50:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:11.936 04:50:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:11.936 04:50:25 -- common/autotest_common.sh@889 -- # local i 00:13:11.936 04:50:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:11.936 04:50:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:11.936 04:50:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:12.195 04:50:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:12.195 [ 00:13:12.195 { 00:13:12.195 "name": "BaseBdev2", 00:13:12.196 "aliases": [ 00:13:12.196 "6e5e7369-e38f-4bf9-89a3-87ae192fdc13" 00:13:12.196 ], 00:13:12.196 "product_name": "Malloc disk", 00:13:12.196 "block_size": 512, 00:13:12.196 "num_blocks": 65536, 00:13:12.196 "uuid": "6e5e7369-e38f-4bf9-89a3-87ae192fdc13", 00:13:12.196 "assigned_rate_limits": { 00:13:12.196 "rw_ios_per_sec": 0, 00:13:12.196 "rw_mbytes_per_sec": 0, 00:13:12.196 "r_mbytes_per_sec": 0, 00:13:12.196 "w_mbytes_per_sec": 0 00:13:12.196 }, 00:13:12.196 "claimed": true, 00:13:12.196 "claim_type": "exclusive_write", 00:13:12.196 "zoned": false, 00:13:12.196 "supported_io_types": { 00:13:12.196 "read": true, 00:13:12.196 "write": true, 00:13:12.196 "unmap": true, 00:13:12.196 "write_zeroes": true, 00:13:12.196 "flush": true, 00:13:12.196 "reset": true, 00:13:12.196 "compare": false, 00:13:12.196 "compare_and_write": false, 00:13:12.196 "abort": true, 00:13:12.196 "nvme_admin": false, 00:13:12.196 "nvme_io": false 00:13:12.196 }, 00:13:12.196 "memory_domains": [ 00:13:12.196 { 00:13:12.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.196 "dma_device_type": 2 00:13:12.196 } 00:13:12.196 ], 00:13:12.196 "driver_specific": {} 00:13:12.196 } 00:13:12.196 ] 00:13:12.196 04:50:26 -- common/autotest_common.sh@895 -- # return 0 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.196 04:50:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.455 04:50:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:12.455 "name": "Existed_Raid", 00:13:12.455 "uuid": "7c44d7d4-d836-42a1-8718-d55daba4f229", 00:13:12.455 "strip_size_kb": 64, 00:13:12.455 "state": "online", 00:13:12.455 "raid_level": "concat", 00:13:12.455 "superblock": false, 
00:13:12.455 "num_base_bdevs": 2, 00:13:12.455 "num_base_bdevs_discovered": 2, 00:13:12.455 "num_base_bdevs_operational": 2, 00:13:12.455 "base_bdevs_list": [ 00:13:12.455 { 00:13:12.455 "name": "BaseBdev1", 00:13:12.455 "uuid": "a6f5fe3e-6082-407b-9584-d426884b302a", 00:13:12.455 "is_configured": true, 00:13:12.455 "data_offset": 0, 00:13:12.455 "data_size": 65536 00:13:12.455 }, 00:13:12.455 { 00:13:12.455 "name": "BaseBdev2", 00:13:12.455 "uuid": "6e5e7369-e38f-4bf9-89a3-87ae192fdc13", 00:13:12.455 "is_configured": true, 00:13:12.455 "data_offset": 0, 00:13:12.455 "data_size": 65536 00:13:12.455 } 00:13:12.455 ] 00:13:12.455 }' 00:13:12.455 04:50:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:12.455 04:50:26 -- common/autotest_common.sh@10 -- # set +x 00:13:13.023 04:50:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:13.023 [2024-05-15 04:50:27.230905] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.023 [2024-05-15 04:50:27.230935] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.023 [2024-05-15 04:50:27.230982] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.282 04:50:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.541 04:50:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:13.541 "name": "Existed_Raid", 00:13:13.541 "uuid": "7c44d7d4-d836-42a1-8718-d55daba4f229", 00:13:13.541 "strip_size_kb": 64, 00:13:13.541 "state": "offline", 00:13:13.541 "raid_level": "concat", 00:13:13.541 "superblock": false, 00:13:13.541 "num_base_bdevs": 2, 00:13:13.541 "num_base_bdevs_discovered": 1, 00:13:13.541 "num_base_bdevs_operational": 1, 00:13:13.541 "base_bdevs_list": [ 00:13:13.541 { 00:13:13.541 "name": null, 00:13:13.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.541 "is_configured": false, 00:13:13.541 "data_offset": 0, 00:13:13.541 "data_size": 65536 00:13:13.541 }, 00:13:13.541 { 00:13:13.541 "name": "BaseBdev2", 00:13:13.541 "uuid": "6e5e7369-e38f-4bf9-89a3-87ae192fdc13", 00:13:13.541 "is_configured": true, 00:13:13.541 "data_offset": 0, 00:13:13.541 
"data_size": 65536 00:13:13.541 } 00:13:13.541 ] 00:13:13.541 }' 00:13:13.541 04:50:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:13.541 04:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.109 04:50:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:14.369 [2024-05-15 04:50:28.438471] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:14.369 [2024-05-15 04:50:28.438533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027f80 name Existed_Raid, state offline 00:13:14.369 04:50:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:14.369 04:50:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:14.369 04:50:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.369 04:50:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:14.628 04:50:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:14.628 04:50:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:14.628 04:50:28 -- bdev/bdev_raid.sh@287 -- # killprocess 47372 00:13:14.628 04:50:28 -- common/autotest_common.sh@926 -- # '[' -z 47372 ']' 00:13:14.628 04:50:28 -- common/autotest_common.sh@930 -- # kill -0 47372 00:13:14.628 04:50:28 -- common/autotest_common.sh@931 -- # uname 00:13:14.628 04:50:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.628 04:50:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 47372 00:13:14.628 killing process with pid 47372 00:13:14.628 04:50:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:14.628 04:50:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:14.628 04:50:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47372' 00:13:14.628 04:50:28 -- common/autotest_common.sh@945 -- # kill 47372 00:13:14.628 04:50:28 -- common/autotest_common.sh@950 -- # wait 47372 00:13:14.628 [2024-05-15 04:50:28.798807] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.628 [2024-05-15 04:50:28.798919] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.532 ************************************ 00:13:16.532 END TEST raid_state_function_test 00:13:16.532 ************************************ 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:16.532 00:13:16.532 real 0m9.907s 00:13:16.532 user 0m16.082s 00:13:16.532 sys 0m1.264s 00:13:16.532 04:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.532 04:50:30 -- common/autotest_common.sh@10 -- # set +x 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:16.532 04:50:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:16.532 04:50:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:16.532 04:50:30 -- common/autotest_common.sh@10 -- # set +x 
00:13:16.532 ************************************ 00:13:16.532 START TEST raid_state_function_test_sb 00:13:16.532 ************************************ 00:13:16.532 04:50:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:16.532 Process raid pid: 47696 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=47696 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47696' 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47696 /var/tmp/spdk-raid.sock 00:13:16.532 04:50:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:16.532 04:50:30 -- common/autotest_common.sh@819 -- # '[' -z 47696 ']' 00:13:16.532 04:50:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:16.532 04:50:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:16.532 04:50:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:16.532 04:50:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.533 04:50:30 -- common/autotest_common.sh@10 -- # set +x 00:13:16.533 [2024-05-15 04:50:30.446778] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:16.533 [2024-05-15 04:50:30.447001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.533 [2024-05-15 04:50:30.611364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.792 [2024-05-15 04:50:30.836966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.051 [2024-05-15 04:50:31.098246] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.988 04:50:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:17.988 04:50:31 -- common/autotest_common.sh@852 -- # return 0 00:13:17.988 04:50:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:17.988 [2024-05-15 04:50:32.100179] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.988 [2024-05-15 04:50:32.100251] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.988 [2024-05-15 04:50:32.100262] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.988 [2024-05-15 04:50:32.100297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.988 04:50:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.247 04:50:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:18.247 "name": "Existed_Raid", 00:13:18.247 "uuid": "41c5ac5c-1625-4855-a85d-9328414f1bde", 00:13:18.247 "strip_size_kb": 64, 00:13:18.247 "state": "configuring", 00:13:18.247 "raid_level": "concat", 00:13:18.247 "superblock": true, 00:13:18.247 "num_base_bdevs": 2, 00:13:18.247 "num_base_bdevs_discovered": 0, 00:13:18.247 "num_base_bdevs_operational": 2, 00:13:18.247 "base_bdevs_list": [ 00:13:18.247 { 00:13:18.247 "name": "BaseBdev1", 00:13:18.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.247 "is_configured": false, 00:13:18.247 "data_offset": 0, 00:13:18.247 "data_size": 0 00:13:18.247 }, 00:13:18.247 { 00:13:18.247 "name": "BaseBdev2", 00:13:18.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.247 "is_configured": false, 00:13:18.247 "data_offset": 0, 00:13:18.247 "data_size": 0 00:13:18.247 } 00:13:18.247 ] 00:13:18.247 }' 00:13:18.247 04:50:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:18.247 04:50:32 -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.814 04:50:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:18.814 [2024-05-15 04:50:32.980397] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.814 [2024-05-15 04:50:32.980441] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:18.814 04:50:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:19.075 [2024-05-15 04:50:33.176443] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.075 [2024-05-15 04:50:33.176508] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.075 [2024-05-15 04:50:33.176517] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.075 [2024-05-15 04:50:33.176543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.075 04:50:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.334 [2024-05-15 04:50:33.362634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.334 BaseBdev1 00:13:19.334 04:50:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:19.334 04:50:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:19.334 04:50:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:19.334 04:50:33 -- common/autotest_common.sh@889 -- # local i 00:13:19.334 04:50:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:19.334 04:50:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:19.334 04:50:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:19.334 04:50:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.592 [ 00:13:19.592 { 00:13:19.592 "name": "BaseBdev1", 00:13:19.592 "aliases": [ 00:13:19.592 "56a22db9-79fc-4338-b1a6-e6edabb383d7" 00:13:19.592 ], 00:13:19.592 "product_name": "Malloc disk", 00:13:19.592 "block_size": 512, 00:13:19.592 "num_blocks": 65536, 00:13:19.592 "uuid": "56a22db9-79fc-4338-b1a6-e6edabb383d7", 00:13:19.592 "assigned_rate_limits": { 00:13:19.592 "rw_ios_per_sec": 0, 00:13:19.592 "rw_mbytes_per_sec": 0, 00:13:19.592 "r_mbytes_per_sec": 0, 00:13:19.592 "w_mbytes_per_sec": 0 00:13:19.592 }, 00:13:19.592 "claimed": true, 00:13:19.592 "claim_type": "exclusive_write", 00:13:19.592 "zoned": false, 00:13:19.592 "supported_io_types": { 00:13:19.592 "read": true, 00:13:19.592 "write": true, 00:13:19.592 "unmap": true, 00:13:19.592 "write_zeroes": true, 00:13:19.592 "flush": true, 00:13:19.592 "reset": true, 00:13:19.592 "compare": false, 00:13:19.592 "compare_and_write": false, 00:13:19.592 "abort": true, 00:13:19.592 "nvme_admin": false, 00:13:19.592 "nvme_io": false 00:13:19.592 }, 00:13:19.592 "memory_domains": [ 00:13:19.592 { 00:13:19.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.592 "dma_device_type": 2 00:13:19.592 } 00:13:19.592 ], 00:13:19.592 "driver_specific": {} 00:13:19.592 } 00:13:19.592 ] 00:13:19.592 
04:50:33 -- common/autotest_common.sh@895 -- # return 0 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.592 04:50:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.851 04:50:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:19.851 "name": "Existed_Raid", 00:13:19.851 "uuid": "03cdaec4-ce32-43c9-919c-ee5dc44f1a04", 00:13:19.851 "strip_size_kb": 64, 00:13:19.851 "state": "configuring", 00:13:19.851 "raid_level": "concat", 00:13:19.851 "superblock": true, 00:13:19.851 "num_base_bdevs": 2, 00:13:19.851 "num_base_bdevs_discovered": 1, 00:13:19.851 "num_base_bdevs_operational": 2, 00:13:19.851 "base_bdevs_list": [ 00:13:19.851 { 00:13:19.851 "name": "BaseBdev1", 00:13:19.851 "uuid": "56a22db9-79fc-4338-b1a6-e6edabb383d7", 00:13:19.851 "is_configured": true, 00:13:19.851 "data_offset": 2048, 00:13:19.851 "data_size": 63488 00:13:19.851 }, 00:13:19.851 { 00:13:19.851 "name": "BaseBdev2", 00:13:19.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.851 "is_configured": false, 00:13:19.851 "data_offset": 0, 00:13:19.851 "data_size": 0 00:13:19.851 } 00:13:19.851 ] 00:13:19.851 }' 00:13:19.851 04:50:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:19.851 04:50:33 -- common/autotest_common.sh@10 -- # set +x 00:13:20.419 04:50:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:20.677 [2024-05-15 04:50:34.670753] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.677 [2024-05-15 04:50:34.670798] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:13:20.677 04:50:34 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:20.677 04:50:34 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:20.936 04:50:34 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:20.936 BaseBdev1 00:13:20.936 04:50:35 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:20.936 04:50:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:20.936 04:50:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:20.936 04:50:35 -- common/autotest_common.sh@889 -- # local i 00:13:20.936 04:50:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:20.936 04:50:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:20.936 04:50:35 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.195 04:50:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.454 [ 00:13:21.454 { 00:13:21.454 "name": "BaseBdev1", 00:13:21.454 "aliases": [ 00:13:21.454 "e2ac3ba7-55b9-438c-8c3d-9b684c00c0b4" 00:13:21.454 ], 00:13:21.454 "product_name": "Malloc disk", 00:13:21.454 "block_size": 512, 00:13:21.454 "num_blocks": 65536, 00:13:21.454 "uuid": "e2ac3ba7-55b9-438c-8c3d-9b684c00c0b4", 00:13:21.454 "assigned_rate_limits": { 00:13:21.454 "rw_ios_per_sec": 0, 00:13:21.454 "rw_mbytes_per_sec": 0, 00:13:21.454 "r_mbytes_per_sec": 0, 00:13:21.454 "w_mbytes_per_sec": 0 00:13:21.454 }, 00:13:21.454 "claimed": false, 00:13:21.454 "zoned": false, 00:13:21.454 "supported_io_types": { 00:13:21.454 "read": true, 00:13:21.454 "write": true, 00:13:21.454 "unmap": true, 00:13:21.454 "write_zeroes": true, 00:13:21.454 "flush": true, 00:13:21.454 "reset": true, 00:13:21.454 "compare": false, 00:13:21.454 "compare_and_write": false, 00:13:21.454 "abort": true, 00:13:21.454 "nvme_admin": false, 00:13:21.454 "nvme_io": false 00:13:21.454 }, 00:13:21.454 "memory_domains": [ 00:13:21.454 { 00:13:21.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.454 "dma_device_type": 2 00:13:21.454 } 00:13:21.454 ], 00:13:21.454 "driver_specific": {} 00:13:21.454 } 00:13:21.455 ] 00:13:21.455 04:50:35 -- common/autotest_common.sh@895 -- # return 0 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:21.455 [2024-05-15 04:50:35.597674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.455 [2024-05-15 04:50:35.599219] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.455 [2024-05-15 04:50:35.599272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.455 04:50:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.713 04:50:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:21.713 "name": "Existed_Raid", 00:13:21.713 "uuid": "b5080eea-7da6-4cbd-a0d3-d12f7cb549e2", 00:13:21.713 "strip_size_kb": 64, 00:13:21.713 "state": 
"configuring", 00:13:21.713 "raid_level": "concat", 00:13:21.713 "superblock": true, 00:13:21.713 "num_base_bdevs": 2, 00:13:21.713 "num_base_bdevs_discovered": 1, 00:13:21.713 "num_base_bdevs_operational": 2, 00:13:21.713 "base_bdevs_list": [ 00:13:21.713 { 00:13:21.713 "name": "BaseBdev1", 00:13:21.713 "uuid": "e2ac3ba7-55b9-438c-8c3d-9b684c00c0b4", 00:13:21.713 "is_configured": true, 00:13:21.713 "data_offset": 2048, 00:13:21.713 "data_size": 63488 00:13:21.713 }, 00:13:21.713 { 00:13:21.713 "name": "BaseBdev2", 00:13:21.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.713 "is_configured": false, 00:13:21.713 "data_offset": 0, 00:13:21.713 "data_size": 0 00:13:21.713 } 00:13:21.713 ] 00:13:21.713 }' 00:13:21.714 04:50:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:21.714 04:50:35 -- common/autotest_common.sh@10 -- # set +x 00:13:22.337 04:50:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.597 [2024-05-15 04:50:36.677428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.597 [2024-05-15 04:50:36.677600] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:13:22.597 [2024-05-15 04:50:36.677612] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:22.597 [2024-05-15 04:50:36.677711] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:22.597 [2024-05-15 04:50:36.677957] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:13:22.597 [2024-05-15 04:50:36.677968] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:13:22.597 BaseBdev2 00:13:22.597 [2024-05-15 04:50:36.678068] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.597 04:50:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:22.597 04:50:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:22.597 04:50:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:22.597 04:50:36 -- common/autotest_common.sh@889 -- # local i 00:13:22.597 04:50:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:22.597 04:50:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:22.597 04:50:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.855 04:50:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.855 [ 00:13:22.855 { 00:13:22.855 "name": "BaseBdev2", 00:13:22.855 "aliases": [ 00:13:22.855 "bf2ba677-85b7-4fed-b64d-d59a876c3f5c" 00:13:22.855 ], 00:13:22.855 "product_name": "Malloc disk", 00:13:22.855 "block_size": 512, 00:13:22.855 "num_blocks": 65536, 00:13:22.855 "uuid": "bf2ba677-85b7-4fed-b64d-d59a876c3f5c", 00:13:22.855 "assigned_rate_limits": { 00:13:22.855 "rw_ios_per_sec": 0, 00:13:22.855 "rw_mbytes_per_sec": 0, 00:13:22.855 "r_mbytes_per_sec": 0, 00:13:22.855 "w_mbytes_per_sec": 0 00:13:22.855 }, 00:13:22.855 "claimed": true, 00:13:22.855 "claim_type": "exclusive_write", 00:13:22.855 "zoned": false, 00:13:22.855 "supported_io_types": { 00:13:22.855 "read": true, 00:13:22.855 "write": true, 00:13:22.855 "unmap": true, 00:13:22.855 "write_zeroes": true, 00:13:22.855 "flush": true, 00:13:22.855 
"reset": true, 00:13:22.855 "compare": false, 00:13:22.855 "compare_and_write": false, 00:13:22.855 "abort": true, 00:13:22.855 "nvme_admin": false, 00:13:22.855 "nvme_io": false 00:13:22.855 }, 00:13:22.855 "memory_domains": [ 00:13:22.855 { 00:13:22.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.855 "dma_device_type": 2 00:13:22.855 } 00:13:22.855 ], 00:13:22.855 "driver_specific": {} 00:13:22.855 } 00:13:22.855 ] 00:13:22.855 04:50:37 -- common/autotest_common.sh@895 -- # return 0 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.855 04:50:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.114 04:50:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.114 "name": "Existed_Raid", 00:13:23.114 "uuid": "b5080eea-7da6-4cbd-a0d3-d12f7cb549e2", 00:13:23.114 "strip_size_kb": 64, 00:13:23.114 "state": "online", 00:13:23.114 "raid_level": "concat", 00:13:23.114 "superblock": true, 00:13:23.114 "num_base_bdevs": 2, 00:13:23.114 "num_base_bdevs_discovered": 2, 00:13:23.114 "num_base_bdevs_operational": 2, 00:13:23.114 "base_bdevs_list": [ 00:13:23.114 { 00:13:23.114 "name": "BaseBdev1", 00:13:23.114 "uuid": "e2ac3ba7-55b9-438c-8c3d-9b684c00c0b4", 00:13:23.114 "is_configured": true, 00:13:23.114 "data_offset": 2048, 00:13:23.114 "data_size": 63488 00:13:23.114 }, 00:13:23.114 { 00:13:23.114 "name": "BaseBdev2", 00:13:23.114 "uuid": "bf2ba677-85b7-4fed-b64d-d59a876c3f5c", 00:13:23.114 "is_configured": true, 00:13:23.114 "data_offset": 2048, 00:13:23.114 "data_size": 63488 00:13:23.114 } 00:13:23.114 ] 00:13:23.114 }' 00:13:23.114 04:50:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.114 04:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.681 04:50:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:23.940 [2024-05-15 04:50:37.985640] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.940 [2024-05-15 04:50:37.985672] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.940 [2024-05-15 04:50:37.985728] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:23.940 
04:50:38 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.940 04:50:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.198 04:50:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:24.198 "name": "Existed_Raid", 00:13:24.198 "uuid": "b5080eea-7da6-4cbd-a0d3-d12f7cb549e2", 00:13:24.198 "strip_size_kb": 64, 00:13:24.198 "state": "offline", 00:13:24.198 "raid_level": "concat", 00:13:24.198 "superblock": true, 00:13:24.198 "num_base_bdevs": 2, 00:13:24.198 "num_base_bdevs_discovered": 1, 00:13:24.198 "num_base_bdevs_operational": 1, 00:13:24.198 "base_bdevs_list": [ 00:13:24.198 { 00:13:24.198 "name": null, 00:13:24.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.198 "is_configured": false, 00:13:24.198 "data_offset": 2048, 00:13:24.198 "data_size": 63488 00:13:24.198 }, 00:13:24.198 { 00:13:24.198 "name": "BaseBdev2", 00:13:24.198 "uuid": "bf2ba677-85b7-4fed-b64d-d59a876c3f5c", 00:13:24.198 "is_configured": true, 00:13:24.198 "data_offset": 2048, 00:13:24.198 "data_size": 63488 00:13:24.198 } 00:13:24.198 ] 00:13:24.198 }' 00:13:24.198 04:50:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:24.198 04:50:38 -- common/autotest_common.sh@10 -- # set +x 00:13:24.764 04:50:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:24.764 04:50:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:24.764 04:50:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.764 04:50:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:25.023 04:50:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:25.023 04:50:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.023 04:50:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:25.281 [2024-05-15 04:50:39.319146] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.281 [2024-05-15 04:50:39.319218] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:13:25.281 04:50:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:25.281 04:50:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:25.281 04:50:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.281 04:50:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:25.540 04:50:39 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:13:25.540 04:50:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:25.540 04:50:39 -- bdev/bdev_raid.sh@287 -- # killprocess 47696 00:13:25.540 04:50:39 -- common/autotest_common.sh@926 -- # '[' -z 47696 ']' 00:13:25.540 04:50:39 -- common/autotest_common.sh@930 -- # kill -0 47696 00:13:25.540 04:50:39 -- common/autotest_common.sh@931 -- # uname 00:13:25.540 04:50:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:25.540 04:50:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 47696 00:13:25.540 killing process with pid 47696 00:13:25.540 04:50:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:25.540 04:50:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:25.540 04:50:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47696' 00:13:25.540 04:50:39 -- common/autotest_common.sh@945 -- # kill 47696 00:13:25.540 04:50:39 -- common/autotest_common.sh@950 -- # wait 47696 00:13:25.540 [2024-05-15 04:50:39.679896] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.540 [2024-05-15 04:50:39.680001] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.914 ************************************ 00:13:26.914 END TEST raid_state_function_test_sb 00:13:26.914 ************************************ 00:13:26.914 04:50:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:26.914 00:13:26.914 real 0m10.808s 00:13:26.914 user 0m17.674s 00:13:26.914 sys 0m1.373s 00:13:26.914 04:50:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.914 04:50:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:27.173 04:50:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:27.173 04:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:27.173 04:50:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.173 ************************************ 00:13:27.173 START TEST raid_superblock_test 00:13:27.173 ************************************ 00:13:27.173 04:50:41 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:27.173 04:50:41 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:27.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@357 -- # raid_pid=48031 00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@358 -- # waitforlisten 48031 /var/tmp/spdk-raid.sock 00:13:27.174 04:50:41 -- common/autotest_common.sh@819 -- # '[' -z 48031 ']' 00:13:27.174 04:50:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:27.174 04:50:41 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:27.174 04:50:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:27.174 04:50:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:27.174 04:50:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:27.174 04:50:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.174 [2024-05-15 04:50:41.316622] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:27.174 [2024-05-15 04:50:41.317115] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48031 ] 00:13:27.433 [2024-05-15 04:50:41.503965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.692 [2024-05-15 04:50:41.775939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.951 [2024-05-15 04:50:42.042899] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.887 04:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.887 04:50:42 -- common/autotest_common.sh@852 -- # return 0 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:28.887 04:50:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:28.887 malloc1 00:13:28.887 04:50:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:29.146 [2024-05-15 04:50:43.153064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:29.146 [2024-05-15 04:50:43.153144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.146 [2024-05-15 04:50:43.153212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:13:29.146 [2024-05-15 04:50:43.153252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.146 [2024-05-15 04:50:43.155202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.146 [2024-05-15 04:50:43.155242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:29.146 pt1 00:13:29.146 04:50:43 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:29.146 malloc2 00:13:29.146 04:50:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:29.404 [2024-05-15 04:50:43.477288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:29.404 [2024-05-15 04:50:43.477348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.404 [2024-05-15 04:50:43.477404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:13:29.404 [2024-05-15 04:50:43.477438] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.404 [2024-05-15 04:50:43.478940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.404 [2024-05-15 04:50:43.478980] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:29.404 pt2 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:13:29.404 [2024-05-15 04:50:43.617397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:29.404 [2024-05-15 04:50:43.620764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:29.404 [2024-05-15 04:50:43.621062] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002a380 00:13:29.404 [2024-05-15 04:50:43.621097] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:29.404 [2024-05-15 04:50:43.621416] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:29.404 [2024-05-15 04:50:43.622036] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002a380 00:13:29.404 [2024-05-15 04:50:43.622076] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002a380 00:13:29.404 [2024-05-15 04:50:43.622449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.404 04:50:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.663 04:50:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:29.663 "name": "raid_bdev1", 00:13:29.663 "uuid": "0902b58f-ef35-4147-992c-c829139bee9d", 00:13:29.663 "strip_size_kb": 64, 00:13:29.663 "state": "online", 00:13:29.663 "raid_level": "concat", 00:13:29.663 "superblock": true, 00:13:29.663 "num_base_bdevs": 2, 00:13:29.663 "num_base_bdevs_discovered": 2, 00:13:29.663 "num_base_bdevs_operational": 2, 00:13:29.663 "base_bdevs_list": [ 00:13:29.663 { 00:13:29.663 "name": "pt1", 00:13:29.663 "uuid": "88acdc5b-7658-5ffe-9026-a373000fd101", 00:13:29.663 "is_configured": true, 00:13:29.663 "data_offset": 2048, 00:13:29.663 "data_size": 63488 00:13:29.663 }, 00:13:29.663 { 00:13:29.663 "name": "pt2", 00:13:29.663 "uuid": "0fb53fb7-6cf9-554c-8d31-40031c90d5c5", 00:13:29.663 "is_configured": true, 00:13:29.663 "data_offset": 2048, 00:13:29.663 "data_size": 63488 00:13:29.663 } 00:13:29.663 ] 00:13:29.663 }' 00:13:29.663 04:50:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:29.663 04:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:30.231 04:50:44 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:30.231 04:50:44 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:30.231 [2024-05-15 04:50:44.458419] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.490 04:50:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0902b58f-ef35-4147-992c-c829139bee9d 00:13:30.490 04:50:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 0902b58f-ef35-4147-992c-c829139bee9d ']' 00:13:30.490 04:50:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:30.490 [2024-05-15 04:50:44.598368] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.490 [2024-05-15 04:50:44.598406] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.490 [2024-05-15 04:50:44.598496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.490 [2024-05-15 04:50:44.598546] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.490 [2024-05-15 04:50:44.598560] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a380 name raid_bdev1, state offline 00:13:30.490 04:50:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:30.490 04:50:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.749 04:50:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:30.749 04:50:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:30.749 04:50:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.749 04:50:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:30.749 04:50:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.749 04:50:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:31.008 04:50:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:31.008 04:50:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:31.267 04:50:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:31.267 04:50:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:31.267 04:50:45 -- common/autotest_common.sh@640 -- # local es=0 00:13:31.267 04:50:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:31.267 04:50:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.267 04:50:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:31.267 04:50:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.267 04:50:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:31.267 04:50:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.267 04:50:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:31.267 04:50:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.267 04:50:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:31.267 04:50:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:31.267 [2024-05-15 04:50:45.470431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:31.267 [2024-05-15 04:50:45.471915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:31.267 [2024-05-15 04:50:45.471965] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:31.267 [2024-05-15 04:50:45.472026] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:31.267 [2024-05-15 04:50:45.472053] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.267 [2024-05-15 04:50:45.472063] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a980 name raid_bdev1, state configuring 00:13:31.267 request: 00:13:31.267 { 00:13:31.267 "name": "raid_bdev1", 00:13:31.267 "raid_level": "concat", 00:13:31.267 "base_bdevs": [ 00:13:31.267 "malloc1", 00:13:31.267 "malloc2" 00:13:31.267 ], 00:13:31.267 "superblock": false, 00:13:31.267 "strip_size_kb": 64, 00:13:31.267 "method": "bdev_raid_create", 00:13:31.267 "req_id": 1 00:13:31.267 } 00:13:31.267 Got JSON-RPC error response 00:13:31.267 response: 00:13:31.267 { 00:13:31.267 "code": -17, 00:13:31.268 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:31.268 } 00:13:31.268 04:50:45 -- common/autotest_common.sh@643 -- # es=1 
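
The block above is a deliberate failure case: after the passthru bdevs are torn down, bdev_raid_create is rerun directly on malloc1/malloc2, and because both malloc bdevs still carry the superblock written for raid_bdev1, the daemon refuses with JSON-RPC error -17 ("File exists"). The NOT wrapper simply inverts the exit status; a hedged equivalent of that check, using the $rpc alias from the sketch above:

    # Expected-failure probe: recreating raid_bdev1 over bdevs that already hold
    # its superblock must be rejected (JSON-RPC error -17, "File exists").
    if $rpc bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'unexpected success: stale superblock was not detected' >&2
        exit 1
    fi
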
00:13:31.268 04:50:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:31.268 04:50:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:31.268 04:50:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:31.268 04:50:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:31.268 04:50:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.526 04:50:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:31.526 04:50:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:31.526 04:50:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:31.786 [2024-05-15 04:50:45.758494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:31.786 [2024-05-15 04:50:45.758577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.786 [2024-05-15 04:50:45.758617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:13:31.786 [2024-05-15 04:50:45.758643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.786 [2024-05-15 04:50:45.760292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.786 [2024-05-15 04:50:45.760331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:31.786 [2024-05-15 04:50:45.760419] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:31.786 [2024-05-15 04:50:45.760468] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:31.786 pt1 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:31.786 "name": "raid_bdev1", 00:13:31.786 "uuid": "0902b58f-ef35-4147-992c-c829139bee9d", 00:13:31.786 "strip_size_kb": 64, 00:13:31.786 "state": "configuring", 00:13:31.786 "raid_level": "concat", 00:13:31.786 "superblock": true, 00:13:31.786 "num_base_bdevs": 2, 00:13:31.786 "num_base_bdevs_discovered": 1, 00:13:31.786 "num_base_bdevs_operational": 2, 00:13:31.786 "base_bdevs_list": [ 00:13:31.786 { 00:13:31.786 "name": "pt1", 00:13:31.786 "uuid": "88acdc5b-7658-5ffe-9026-a373000fd101", 00:13:31.786 "is_configured": true, 00:13:31.786 "data_offset": 2048, 00:13:31.786 "data_size": 63488 00:13:31.786 }, 00:13:31.786 { 00:13:31.786 "name": null, 00:13:31.786 
"uuid": "0fb53fb7-6cf9-554c-8d31-40031c90d5c5", 00:13:31.786 "is_configured": false, 00:13:31.786 "data_offset": 2048, 00:13:31.786 "data_size": 63488 00:13:31.786 } 00:13:31.786 ] 00:13:31.786 }' 00:13:31.786 04:50:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:31.786 04:50:45 -- common/autotest_common.sh@10 -- # set +x 00:13:32.353 04:50:46 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:13:32.353 04:50:46 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:32.353 04:50:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:32.353 04:50:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:32.612 [2024-05-15 04:50:46.678635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:32.612 [2024-05-15 04:50:46.678987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.612 [2024-05-15 04:50:46.679049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d380 00:13:32.612 [2024-05-15 04:50:46.679074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.612 [2024-05-15 04:50:46.679403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.612 [2024-05-15 04:50:46.679430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:32.612 [2024-05-15 04:50:46.679520] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:32.612 [2024-05-15 04:50:46.679540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:32.612 [2024-05-15 04:50:46.679614] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002cd80 00:13:32.612 [2024-05-15 04:50:46.679623] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:32.612 [2024-05-15 04:50:46.679727] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:13:32.612 [2024-05-15 04:50:46.679920] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002cd80 00:13:32.612 [2024-05-15 04:50:46.679930] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002cd80 00:13:32.612 [2024-05-15 04:50:46.680014] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.612 pt2 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.612 04:50:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.871 04:50:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:32.871 "name": "raid_bdev1", 00:13:32.871 "uuid": "0902b58f-ef35-4147-992c-c829139bee9d", 00:13:32.871 "strip_size_kb": 64, 00:13:32.871 "state": "online", 00:13:32.871 "raid_level": "concat", 00:13:32.871 "superblock": true, 00:13:32.871 "num_base_bdevs": 2, 00:13:32.871 "num_base_bdevs_discovered": 2, 00:13:32.871 "num_base_bdevs_operational": 2, 00:13:32.871 "base_bdevs_list": [ 00:13:32.871 { 00:13:32.871 "name": "pt1", 00:13:32.871 "uuid": "88acdc5b-7658-5ffe-9026-a373000fd101", 00:13:32.871 "is_configured": true, 00:13:32.871 "data_offset": 2048, 00:13:32.871 "data_size": 63488 00:13:32.871 }, 00:13:32.871 { 00:13:32.871 "name": "pt2", 00:13:32.871 "uuid": "0fb53fb7-6cf9-554c-8d31-40031c90d5c5", 00:13:32.871 "is_configured": true, 00:13:32.871 "data_offset": 2048, 00:13:32.871 "data_size": 63488 00:13:32.871 } 00:13:32.871 ] 00:13:32.871 }' 00:13:32.871 04:50:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:32.871 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:13:33.438 04:50:47 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:33.438 04:50:47 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:33.697 [2024-05-15 04:50:47.707121] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.697 04:50:47 -- bdev/bdev_raid.sh@430 -- # '[' 0902b58f-ef35-4147-992c-c829139bee9d '!=' 0902b58f-ef35-4147-992c-c829139bee9d ']' 00:13:33.697 04:50:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:13:33.697 04:50:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:33.697 04:50:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:33.697 04:50:47 -- bdev/bdev_raid.sh@511 -- # killprocess 48031 00:13:33.697 04:50:47 -- common/autotest_common.sh@926 -- # '[' -z 48031 ']' 00:13:33.697 04:50:47 -- common/autotest_common.sh@930 -- # kill -0 48031 00:13:33.697 04:50:47 -- common/autotest_common.sh@931 -- # uname 00:13:33.697 04:50:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:33.697 04:50:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48031 00:13:33.697 killing process with pid 48031 00:13:33.697 04:50:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:33.697 04:50:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:33.697 04:50:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48031' 00:13:33.697 04:50:47 -- common/autotest_common.sh@945 -- # kill 48031 00:13:33.697 04:50:47 -- common/autotest_common.sh@950 -- # wait 48031 00:13:33.697 [2024-05-15 04:50:47.748888] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.697 [2024-05-15 04:50:47.748954] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.697 [2024-05-15 04:50:47.748991] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.697 [2024-05-15 04:50:47.749000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002cd80 name raid_bdev1, state offline 00:13:33.956 [2024-05-15 04:50:47.944555] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.334 ************************************ 00:13:35.334 END TEST raid_superblock_test 00:13:35.334 
************************************ 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:35.334 00:13:35.334 real 0m8.212s 00:13:35.334 user 0m12.856s 00:13:35.334 sys 0m1.065s 00:13:35.334 04:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.334 04:50:49 -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:35.334 04:50:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:35.334 04:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:35.334 04:50:49 -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 ************************************ 00:13:35.334 START TEST raid_state_function_test 00:13:35.334 ************************************ 00:13:35.334 04:50:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:35.334 Process raid pid: 48286 00:13:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
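
Next up is raid_state_function_test for raid1 with no superblock (pid 48286). Its assertions all reduce to the verify_raid_bdev_state helper traced throughout: dump the RAID bdev and compare fields. A sketch of that check, assuming the same $rpc alias as above:

    # Fetch the entry for one array and assert on its state-machine fields
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    echo "$info" | jq -r '.state'                      # "configuring" until all base bdevs exist, then "online"
    echo "$info" | jq -r '.num_base_bdevs_discovered'  # how many base bdevs are currently attached

The raid1 level matters later in this test: has_redundancy succeeds for raid1, so removing one base bdev leaves the array online with a single operational member, where the earlier concat run went offline after the same removal.
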
00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=48286 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48286' 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48286 /var/tmp/spdk-raid.sock 00:13:35.334 04:50:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:35.334 04:50:49 -- common/autotest_common.sh@819 -- # '[' -z 48286 ']' 00:13:35.334 04:50:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:35.334 04:50:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:35.334 04:50:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:35.334 04:50:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:35.334 04:50:49 -- common/autotest_common.sh@10 -- # set +x 00:13:35.593 [2024-05-15 04:50:49.594646] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:35.593 [2024-05-15 04:50:49.594881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.593 [2024-05-15 04:50:49.772204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.852 [2024-05-15 04:50:50.040256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.111 [2024-05-15 04:50:50.295528] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.047 04:50:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:37.047 04:50:51 -- common/autotest_common.sh@852 -- # return 0 00:13:37.047 04:50:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:37.307 [2024-05-15 04:50:51.283354] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:37.307 [2024-05-15 04:50:51.283424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:37.307 [2024-05-15 04:50:51.283436] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.307 [2024-05-15 04:50:51.283452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:37.307 "name": "Existed_Raid", 00:13:37.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.307 "strip_size_kb": 0, 00:13:37.307 "state": "configuring", 00:13:37.307 "raid_level": "raid1", 00:13:37.307 "superblock": false, 00:13:37.307 "num_base_bdevs": 2, 00:13:37.307 "num_base_bdevs_discovered": 0, 00:13:37.307 "num_base_bdevs_operational": 2, 00:13:37.307 "base_bdevs_list": [ 00:13:37.307 { 00:13:37.307 "name": "BaseBdev1", 00:13:37.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.307 "is_configured": false, 00:13:37.307 "data_offset": 0, 00:13:37.307 "data_size": 0 00:13:37.307 }, 00:13:37.307 { 00:13:37.307 "name": "BaseBdev2", 00:13:37.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.307 "is_configured": false, 00:13:37.307 "data_offset": 0, 00:13:37.307 "data_size": 0 00:13:37.307 } 00:13:37.307 ] 00:13:37.307 }' 00:13:37.307 04:50:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:37.307 04:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:37.875 04:50:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:38.133 [2024-05-15 04:50:52.179434] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:38.133 [2024-05-15 04:50:52.179476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:38.133 04:50:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:38.391 [2024-05-15 04:50:52.383449] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.391 [2024-05-15 04:50:52.383526] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.391 [2024-05-15 04:50:52.383536] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.391 [2024-05-15 04:50:52.383560] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.391 04:50:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.649 [2024-05-15 04:50:52.659975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.649 BaseBdev1 00:13:38.649 04:50:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:38.649 04:50:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:38.649 04:50:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:38.649 04:50:52 -- common/autotest_common.sh@889 -- # local i 00:13:38.649 04:50:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
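
Creating BaseBdev1 is asynchronous from the test's point of view, so the waitforbdev helper being traced here blocks until the new malloc bdev is actually registered. The two essential calls, as they appear in this log:

    # Let pending examine callbacks finish, then wait for the bdev to register
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000   # -t: wait up to 2000 ms for BaseBdev1 to appear
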
00:13:38.649 04:50:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:38.649 04:50:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.907 04:50:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:38.907 [ 00:13:38.907 { 00:13:38.907 "name": "BaseBdev1", 00:13:38.907 "aliases": [ 00:13:38.907 "1e9867dd-ca82-426a-9233-fef497bcfeb4" 00:13:38.907 ], 00:13:38.907 "product_name": "Malloc disk", 00:13:38.907 "block_size": 512, 00:13:38.907 "num_blocks": 65536, 00:13:38.907 "uuid": "1e9867dd-ca82-426a-9233-fef497bcfeb4", 00:13:38.907 "assigned_rate_limits": { 00:13:38.907 "rw_ios_per_sec": 0, 00:13:38.907 "rw_mbytes_per_sec": 0, 00:13:38.907 "r_mbytes_per_sec": 0, 00:13:38.907 "w_mbytes_per_sec": 0 00:13:38.907 }, 00:13:38.907 "claimed": true, 00:13:38.907 "claim_type": "exclusive_write", 00:13:38.907 "zoned": false, 00:13:38.907 "supported_io_types": { 00:13:38.907 "read": true, 00:13:38.907 "write": true, 00:13:38.907 "unmap": true, 00:13:38.907 "write_zeroes": true, 00:13:38.907 "flush": true, 00:13:38.907 "reset": true, 00:13:38.907 "compare": false, 00:13:38.907 "compare_and_write": false, 00:13:38.907 "abort": true, 00:13:38.907 "nvme_admin": false, 00:13:38.907 "nvme_io": false 00:13:38.907 }, 00:13:38.907 "memory_domains": [ 00:13:38.907 { 00:13:38.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.907 "dma_device_type": 2 00:13:38.907 } 00:13:38.907 ], 00:13:38.907 "driver_specific": {} 00:13:38.907 } 00:13:38.907 ] 00:13:38.907 04:50:53 -- common/autotest_common.sh@895 -- # return 0 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.907 04:50:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.166 04:50:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:39.166 "name": "Existed_Raid", 00:13:39.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.166 "strip_size_kb": 0, 00:13:39.166 "state": "configuring", 00:13:39.166 "raid_level": "raid1", 00:13:39.166 "superblock": false, 00:13:39.166 "num_base_bdevs": 2, 00:13:39.166 "num_base_bdevs_discovered": 1, 00:13:39.166 "num_base_bdevs_operational": 2, 00:13:39.166 "base_bdevs_list": [ 00:13:39.166 { 00:13:39.166 "name": "BaseBdev1", 00:13:39.166 "uuid": "1e9867dd-ca82-426a-9233-fef497bcfeb4", 00:13:39.166 "is_configured": true, 00:13:39.166 "data_offset": 0, 00:13:39.166 "data_size": 65536 00:13:39.166 }, 00:13:39.166 { 00:13:39.166 "name": "BaseBdev2", 00:13:39.166 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:39.166 "is_configured": false, 00:13:39.166 "data_offset": 0, 00:13:39.166 "data_size": 0 00:13:39.166 } 00:13:39.166 ] 00:13:39.166 }' 00:13:39.166 04:50:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:39.166 04:50:53 -- common/autotest_common.sh@10 -- # set +x 00:13:39.734 04:50:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:39.734 [2024-05-15 04:50:53.964094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.734 [2024-05-15 04:50:53.964140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:13:39.993 04:50:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:39.993 04:50:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:39.993 [2024-05-15 04:50:54.112191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.993 [2024-05-15 04:50:54.113828] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.993 [2024-05-15 04:50:54.113884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.993 04:50:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.252 04:50:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:40.252 "name": "Existed_Raid", 00:13:40.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.252 "strip_size_kb": 0, 00:13:40.252 "state": "configuring", 00:13:40.252 "raid_level": "raid1", 00:13:40.252 "superblock": false, 00:13:40.252 "num_base_bdevs": 2, 00:13:40.252 "num_base_bdevs_discovered": 1, 00:13:40.252 "num_base_bdevs_operational": 2, 00:13:40.252 "base_bdevs_list": [ 00:13:40.252 { 00:13:40.252 "name": "BaseBdev1", 00:13:40.252 "uuid": "1e9867dd-ca82-426a-9233-fef497bcfeb4", 00:13:40.252 "is_configured": true, 00:13:40.252 "data_offset": 0, 00:13:40.252 "data_size": 65536 00:13:40.252 }, 00:13:40.252 { 00:13:40.252 "name": "BaseBdev2", 00:13:40.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.252 "is_configured": false, 00:13:40.252 "data_offset": 0, 00:13:40.252 "data_size": 0 00:13:40.252 } 00:13:40.252 ] 00:13:40.252 }' 00:13:40.252 04:50:54 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:40.252 04:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.820 04:50:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.820 [2024-05-15 04:50:55.005275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.820 [2024-05-15 04:50:55.005325] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027f80 00:13:40.820 [2024-05-15 04:50:55.005342] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:40.820 [2024-05-15 04:50:55.005432] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:40.820 [2024-05-15 04:50:55.005655] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027f80 00:13:40.820 [2024-05-15 04:50:55.005666] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027f80 00:13:40.820 BaseBdev2 00:13:40.820 [2024-05-15 04:50:55.006114] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.820 04:50:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:40.820 04:50:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:40.820 04:50:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:40.820 04:50:55 -- common/autotest_common.sh@889 -- # local i 00:13:40.820 04:50:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:40.820 04:50:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:40.820 04:50:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.079 04:50:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.079 [ 00:13:41.079 { 00:13:41.079 "name": "BaseBdev2", 00:13:41.079 "aliases": [ 00:13:41.079 "49590634-9201-4884-bede-2f2057dea72a" 00:13:41.079 ], 00:13:41.079 "product_name": "Malloc disk", 00:13:41.079 "block_size": 512, 00:13:41.079 "num_blocks": 65536, 00:13:41.079 "uuid": "49590634-9201-4884-bede-2f2057dea72a", 00:13:41.079 "assigned_rate_limits": { 00:13:41.079 "rw_ios_per_sec": 0, 00:13:41.079 "rw_mbytes_per_sec": 0, 00:13:41.079 "r_mbytes_per_sec": 0, 00:13:41.079 "w_mbytes_per_sec": 0 00:13:41.079 }, 00:13:41.079 "claimed": true, 00:13:41.079 "claim_type": "exclusive_write", 00:13:41.079 "zoned": false, 00:13:41.079 "supported_io_types": { 00:13:41.079 "read": true, 00:13:41.079 "write": true, 00:13:41.079 "unmap": true, 00:13:41.079 "write_zeroes": true, 00:13:41.079 "flush": true, 00:13:41.079 "reset": true, 00:13:41.079 "compare": false, 00:13:41.079 "compare_and_write": false, 00:13:41.079 "abort": true, 00:13:41.079 "nvme_admin": false, 00:13:41.079 "nvme_io": false 00:13:41.079 }, 00:13:41.079 "memory_domains": [ 00:13:41.079 { 00:13:41.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.079 "dma_device_type": 2 00:13:41.079 } 00:13:41.079 ], 00:13:41.079 "driver_specific": {} 00:13:41.079 } 00:13:41.079 ] 00:13:41.079 04:50:55 -- common/autotest_common.sh@895 -- # return 0 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:41.079 
04:50:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.079 04:50:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.338 04:50:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:41.338 "name": "Existed_Raid", 00:13:41.338 "uuid": "c959cb61-de27-4693-a99c-b833a1831f9d", 00:13:41.338 "strip_size_kb": 0, 00:13:41.338 "state": "online", 00:13:41.338 "raid_level": "raid1", 00:13:41.338 "superblock": false, 00:13:41.338 "num_base_bdevs": 2, 00:13:41.338 "num_base_bdevs_discovered": 2, 00:13:41.338 "num_base_bdevs_operational": 2, 00:13:41.338 "base_bdevs_list": [ 00:13:41.338 { 00:13:41.338 "name": "BaseBdev1", 00:13:41.338 "uuid": "1e9867dd-ca82-426a-9233-fef497bcfeb4", 00:13:41.338 "is_configured": true, 00:13:41.338 "data_offset": 0, 00:13:41.338 "data_size": 65536 00:13:41.338 }, 00:13:41.338 { 00:13:41.338 "name": "BaseBdev2", 00:13:41.338 "uuid": "49590634-9201-4884-bede-2f2057dea72a", 00:13:41.338 "is_configured": true, 00:13:41.338 "data_offset": 0, 00:13:41.338 "data_size": 65536 00:13:41.338 } 00:13:41.338 ] 00:13:41.338 }' 00:13:41.338 04:50:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:41.338 04:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:41.907 04:50:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:41.907 [2024-05-15 04:50:56.125446] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:42.167 "name": "Existed_Raid", 00:13:42.167 "uuid": "c959cb61-de27-4693-a99c-b833a1831f9d", 00:13:42.167 "strip_size_kb": 0, 00:13:42.167 "state": "online", 00:13:42.167 "raid_level": "raid1", 00:13:42.167 "superblock": false, 00:13:42.167 "num_base_bdevs": 2, 00:13:42.167 "num_base_bdevs_discovered": 1, 00:13:42.167 "num_base_bdevs_operational": 1, 00:13:42.167 "base_bdevs_list": [ 00:13:42.167 { 00:13:42.167 "name": null, 00:13:42.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.167 "is_configured": false, 00:13:42.167 "data_offset": 0, 00:13:42.167 "data_size": 65536 00:13:42.167 }, 00:13:42.167 { 00:13:42.167 "name": "BaseBdev2", 00:13:42.167 "uuid": "49590634-9201-4884-bede-2f2057dea72a", 00:13:42.167 "is_configured": true, 00:13:42.167 "data_offset": 0, 00:13:42.167 "data_size": 65536 00:13:42.167 } 00:13:42.167 ] 00:13:42.167 }' 00:13:42.167 04:50:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:42.167 04:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 04:50:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:42.735 04:50:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:42.735 04:50:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:42.735 04:50:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.993 04:50:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:42.993 04:50:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.993 04:50:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:43.252 [2024-05-15 04:50:57.237340] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.252 [2024-05-15 04:50:57.237369] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.252 [2024-05-15 04:50:57.237421] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.252 [2024-05-15 04:50:57.338628] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.252 [2024-05-15 04:50:57.338666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027f80 name Existed_Raid, state offline 00:13:43.252 04:50:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:43.252 04:50:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:43.252 04:50:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:43.252 04:50:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.511 04:50:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:43.511 04:50:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:43.511 04:50:57 -- bdev/bdev_raid.sh@287 -- # killprocess 48286 00:13:43.511 04:50:57 -- common/autotest_common.sh@926 -- # '[' -z 48286 ']' 00:13:43.511 04:50:57 -- common/autotest_common.sh@930 -- # kill -0 48286 00:13:43.511 04:50:57 -- common/autotest_common.sh@931 -- # uname 00:13:43.511 04:50:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:43.511 04:50:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48286 00:13:43.511 killing process with pid 48286 00:13:43.511 04:50:57 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:43.511 04:50:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:43.511 04:50:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48286' 00:13:43.511 04:50:57 -- common/autotest_common.sh@945 -- # kill 48286 00:13:43.511 04:50:57 -- common/autotest_common.sh@950 -- # wait 48286 00:13:43.511 [2024-05-15 04:50:57.622343] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.511 [2024-05-15 04:50:57.622463] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.886 ************************************ 00:13:44.886 END TEST raid_state_function_test 00:13:44.886 ************************************ 00:13:44.886 04:50:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:44.886 00:13:44.886 real 0m9.601s 00:13:44.886 user 0m15.410s 00:13:44.886 sys 0m1.287s 00:13:44.886 04:50:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.886 04:50:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.886 04:50:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:44.886 04:50:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:44.887 04:50:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.887 04:50:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.887 ************************************ 00:13:44.887 START TEST raid_state_function_test_sb 00:13:44.887 ************************************ 00:13:44.887 04:50:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:44.887 Process raid pid: 48598 00:13:44.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
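
This final run repeats the raid1 state-machine test with superblocks enabled (superblock_create_arg=-s, pid 48598). The -s flag shows up in the data layout the JSON dumps report: with a superblock, each 65536-block base bdev exposes data_offset 2048 and data_size 63488 (the first 2048 blocks, 1 MiB at 512 B blocks, hold RAID metadata), whereas the non-superblock run above reported data_offset 0 and data_size 65536. One way to read that off a live array, again assuming the $rpc alias:

    # Show per-member data layout; expect offset 2048 / size 63488 when -s was used
    $rpc bdev_raid_get_bdevs all |
        jq -r '.[] | .base_bdevs_list[] | "\(.name // "unconfigured") offset=\(.data_offset) size=\(.data_size)"'
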
00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=48598 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48598' 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:44.887 04:50:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48598 /var/tmp/spdk-raid.sock 00:13:44.887 04:50:59 -- common/autotest_common.sh@819 -- # '[' -z 48598 ']' 00:13:44.887 04:50:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.887 04:50:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.887 04:50:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:44.887 04:50:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.887 04:50:59 -- common/autotest_common.sh@10 -- # set +x 00:13:45.145 [2024-05-15 04:50:59.254916] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:45.145 [2024-05-15 04:50:59.255145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.403 [2024-05-15 04:50:59.431369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.667 [2024-05-15 04:50:59.725270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.929 [2024-05-15 04:50:59.989619] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.494 04:51:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.494 04:51:00 -- common/autotest_common.sh@852 -- # return 0 00:13:46.494 04:51:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:46.752 [2024-05-15 04:51:00.841915] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.752 [2024-05-15 04:51:00.841990] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.752 [2024-05-15 04:51:00.842001] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.752 [2024-05-15 04:51:00.842019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.752 04:51:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.010 04:51:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:47.010 "name": "Existed_Raid", 00:13:47.010 "uuid": "30b1d249-862c-46cd-8fe3-4b4ae63512a9", 00:13:47.010 "strip_size_kb": 0, 00:13:47.010 "state": "configuring", 00:13:47.010 "raid_level": "raid1", 00:13:47.010 "superblock": true, 00:13:47.010 "num_base_bdevs": 2, 00:13:47.010 "num_base_bdevs_discovered": 0, 00:13:47.010 "num_base_bdevs_operational": 2, 00:13:47.010 "base_bdevs_list": [ 00:13:47.010 { 00:13:47.010 "name": "BaseBdev1", 00:13:47.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.010 "is_configured": false, 00:13:47.010 "data_offset": 0, 00:13:47.010 "data_size": 0 00:13:47.010 }, 00:13:47.010 { 00:13:47.010 "name": "BaseBdev2", 00:13:47.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.010 "is_configured": false, 00:13:47.010 "data_offset": 0, 00:13:47.010 "data_size": 0 00:13:47.011 } 00:13:47.011 ] 00:13:47.011 }' 00:13:47.011 04:51:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:47.011 04:51:01 -- common/autotest_common.sh@10 -- # set +x 00:13:47.268 04:51:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:47.526 [2024-05-15 04:51:01.669860] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.526 [2024-05-15 04:51:01.669896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:47.526 04:51:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:47.784 [2024-05-15 04:51:01.805979] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.785 [2024-05-15 04:51:01.806054] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.785 [2024-05-15 04:51:01.806065] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.785 [2024-05-15 04:51:01.806090] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.785 04:51:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.043 [2024-05-15 04:51:02.065548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.043 BaseBdev1 00:13:48.043 04:51:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:48.043 04:51:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:48.043 04:51:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:48.043 04:51:02 -- common/autotest_common.sh@889 -- # local i 00:13:48.043 04:51:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:48.043 04:51:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:48.043 04:51:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.043 04:51:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.301 [ 00:13:48.301 { 00:13:48.301 "name": "BaseBdev1", 00:13:48.301 "aliases": [ 00:13:48.301 "8c2ef156-7b84-4515-a328-78ea1cd330a5" 00:13:48.301 ], 00:13:48.301 "product_name": "Malloc 
disk", 00:13:48.301 "block_size": 512, 00:13:48.301 "num_blocks": 65536, 00:13:48.301 "uuid": "8c2ef156-7b84-4515-a328-78ea1cd330a5", 00:13:48.301 "assigned_rate_limits": { 00:13:48.301 "rw_ios_per_sec": 0, 00:13:48.301 "rw_mbytes_per_sec": 0, 00:13:48.301 "r_mbytes_per_sec": 0, 00:13:48.301 "w_mbytes_per_sec": 0 00:13:48.301 }, 00:13:48.301 "claimed": true, 00:13:48.301 "claim_type": "exclusive_write", 00:13:48.301 "zoned": false, 00:13:48.301 "supported_io_types": { 00:13:48.301 "read": true, 00:13:48.301 "write": true, 00:13:48.301 "unmap": true, 00:13:48.301 "write_zeroes": true, 00:13:48.301 "flush": true, 00:13:48.301 "reset": true, 00:13:48.301 "compare": false, 00:13:48.301 "compare_and_write": false, 00:13:48.301 "abort": true, 00:13:48.301 "nvme_admin": false, 00:13:48.301 "nvme_io": false 00:13:48.301 }, 00:13:48.301 "memory_domains": [ 00:13:48.301 { 00:13:48.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.301 "dma_device_type": 2 00:13:48.301 } 00:13:48.301 ], 00:13:48.301 "driver_specific": {} 00:13:48.301 } 00:13:48.301 ] 00:13:48.301 04:51:02 -- common/autotest_common.sh@895 -- # return 0 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.301 04:51:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.302 04:51:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.302 04:51:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.302 04:51:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.302 04:51:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.561 04:51:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.561 "name": "Existed_Raid", 00:13:48.561 "uuid": "bc528245-69ac-4dce-bbe6-597d853e50f0", 00:13:48.561 "strip_size_kb": 0, 00:13:48.561 "state": "configuring", 00:13:48.561 "raid_level": "raid1", 00:13:48.561 "superblock": true, 00:13:48.561 "num_base_bdevs": 2, 00:13:48.561 "num_base_bdevs_discovered": 1, 00:13:48.561 "num_base_bdevs_operational": 2, 00:13:48.561 "base_bdevs_list": [ 00:13:48.561 { 00:13:48.561 "name": "BaseBdev1", 00:13:48.561 "uuid": "8c2ef156-7b84-4515-a328-78ea1cd330a5", 00:13:48.561 "is_configured": true, 00:13:48.561 "data_offset": 2048, 00:13:48.561 "data_size": 63488 00:13:48.561 }, 00:13:48.561 { 00:13:48.561 "name": "BaseBdev2", 00:13:48.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.561 "is_configured": false, 00:13:48.561 "data_offset": 0, 00:13:48.561 "data_size": 0 00:13:48.561 } 00:13:48.561 ] 00:13:48.561 }' 00:13:48.561 04:51:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.561 04:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:49.129 04:51:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:49.387 [2024-05-15 04:51:03.393681] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.387 [2024-05-15 
04:51:03.393886] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:13:49.387 04:51:03 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:49.387 04:51:03 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:49.670 04:51:03 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:49.670 BaseBdev1 00:13:49.670 04:51:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:49.670 04:51:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:49.670 04:51:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:49.670 04:51:03 -- common/autotest_common.sh@889 -- # local i 00:13:49.670 04:51:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:49.670 04:51:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:49.670 04:51:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:49.929 04:51:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.187 [ 00:13:50.187 { 00:13:50.187 "name": "BaseBdev1", 00:13:50.187 "aliases": [ 00:13:50.187 "d96f4544-6bd5-4f9b-9c5e-cb28a563d284" 00:13:50.187 ], 00:13:50.187 "product_name": "Malloc disk", 00:13:50.187 "block_size": 512, 00:13:50.187 "num_blocks": 65536, 00:13:50.187 "uuid": "d96f4544-6bd5-4f9b-9c5e-cb28a563d284", 00:13:50.187 "assigned_rate_limits": { 00:13:50.187 "rw_ios_per_sec": 0, 00:13:50.187 "rw_mbytes_per_sec": 0, 00:13:50.187 "r_mbytes_per_sec": 0, 00:13:50.187 "w_mbytes_per_sec": 0 00:13:50.187 }, 00:13:50.187 "claimed": false, 00:13:50.187 "zoned": false, 00:13:50.187 "supported_io_types": { 00:13:50.187 "read": true, 00:13:50.187 "write": true, 00:13:50.187 "unmap": true, 00:13:50.187 "write_zeroes": true, 00:13:50.187 "flush": true, 00:13:50.187 "reset": true, 00:13:50.187 "compare": false, 00:13:50.187 "compare_and_write": false, 00:13:50.187 "abort": true, 00:13:50.187 "nvme_admin": false, 00:13:50.187 "nvme_io": false 00:13:50.187 }, 00:13:50.187 "memory_domains": [ 00:13:50.187 { 00:13:50.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.187 "dma_device_type": 2 00:13:50.187 } 00:13:50.187 ], 00:13:50.187 "driver_specific": {} 00:13:50.187 } 00:13:50.187 ] 00:13:50.187 04:51:04 -- common/autotest_common.sh@895 -- # return 0 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:50.187 [2024-05-15 04:51:04.332016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.187 [2024-05-15 04:51:04.333410] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.187 [2024-05-15 04:51:04.333465] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:50.187 04:51:04 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.187 04:51:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.445 04:51:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:50.445 "name": "Existed_Raid", 00:13:50.445 "uuid": "c31d1943-0fec-4691-a7d0-74884113d932", 00:13:50.445 "strip_size_kb": 0, 00:13:50.445 "state": "configuring", 00:13:50.445 "raid_level": "raid1", 00:13:50.445 "superblock": true, 00:13:50.445 "num_base_bdevs": 2, 00:13:50.445 "num_base_bdevs_discovered": 1, 00:13:50.445 "num_base_bdevs_operational": 2, 00:13:50.445 "base_bdevs_list": [ 00:13:50.445 { 00:13:50.445 "name": "BaseBdev1", 00:13:50.445 "uuid": "d96f4544-6bd5-4f9b-9c5e-cb28a563d284", 00:13:50.445 "is_configured": true, 00:13:50.445 "data_offset": 2048, 00:13:50.445 "data_size": 63488 00:13:50.445 }, 00:13:50.445 { 00:13:50.445 "name": "BaseBdev2", 00:13:50.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.445 "is_configured": false, 00:13:50.445 "data_offset": 0, 00:13:50.445 "data_size": 0 00:13:50.445 } 00:13:50.445 ] 00:13:50.445 }' 00:13:50.445 04:51:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:50.445 04:51:04 -- common/autotest_common.sh@10 -- # set +x 00:13:51.012 04:51:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.271 [2024-05-15 04:51:05.293190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.271 [2024-05-15 04:51:05.293344] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:13:51.271 [2024-05-15 04:51:05.293356] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.271 [2024-05-15 04:51:05.293438] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:51.271 [2024-05-15 04:51:05.293630] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:13:51.271 [2024-05-15 04:51:05.293640] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:13:51.271 BaseBdev2 00:13:51.271 [2024-05-15 04:51:05.293931] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.271 04:51:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:51.271 04:51:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:51.271 04:51:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:51.271 04:51:05 -- common/autotest_common.sh@889 -- # local i 00:13:51.271 04:51:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:51.271 04:51:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:51.271 04:51:05 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.529 04:51:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.529 [ 00:13:51.529 { 00:13:51.529 "name": "BaseBdev2", 00:13:51.529 "aliases": [ 00:13:51.529 "8d36b370-8b15-4d94-95c5-19e94b9cb67e" 00:13:51.529 ], 00:13:51.529 "product_name": "Malloc disk", 00:13:51.529 "block_size": 512, 00:13:51.529 "num_blocks": 65536, 00:13:51.529 "uuid": "8d36b370-8b15-4d94-95c5-19e94b9cb67e", 00:13:51.529 "assigned_rate_limits": { 00:13:51.529 "rw_ios_per_sec": 0, 00:13:51.529 "rw_mbytes_per_sec": 0, 00:13:51.529 "r_mbytes_per_sec": 0, 00:13:51.529 "w_mbytes_per_sec": 0 00:13:51.529 }, 00:13:51.529 "claimed": true, 00:13:51.529 "claim_type": "exclusive_write", 00:13:51.529 "zoned": false, 00:13:51.529 "supported_io_types": { 00:13:51.529 "read": true, 00:13:51.529 "write": true, 00:13:51.529 "unmap": true, 00:13:51.529 "write_zeroes": true, 00:13:51.529 "flush": true, 00:13:51.529 "reset": true, 00:13:51.529 "compare": false, 00:13:51.529 "compare_and_write": false, 00:13:51.529 "abort": true, 00:13:51.529 "nvme_admin": false, 00:13:51.529 "nvme_io": false 00:13:51.529 }, 00:13:51.529 "memory_domains": [ 00:13:51.529 { 00:13:51.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.529 "dma_device_type": 2 00:13:51.529 } 00:13:51.529 ], 00:13:51.529 "driver_specific": {} 00:13:51.529 } 00:13:51.529 ] 00:13:51.529 04:51:05 -- common/autotest_common.sh@895 -- # return 0 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.529 04:51:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.788 04:51:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:51.788 "name": "Existed_Raid", 00:13:51.788 "uuid": "c31d1943-0fec-4691-a7d0-74884113d932", 00:13:51.788 "strip_size_kb": 0, 00:13:51.788 "state": "online", 00:13:51.788 "raid_level": "raid1", 00:13:51.788 "superblock": true, 00:13:51.788 "num_base_bdevs": 2, 00:13:51.788 "num_base_bdevs_discovered": 2, 00:13:51.788 "num_base_bdevs_operational": 2, 00:13:51.788 "base_bdevs_list": [ 00:13:51.788 { 00:13:51.788 "name": "BaseBdev1", 00:13:51.788 "uuid": "d96f4544-6bd5-4f9b-9c5e-cb28a563d284", 00:13:51.788 "is_configured": true, 00:13:51.788 "data_offset": 2048, 00:13:51.788 "data_size": 63488 00:13:51.788 }, 00:13:51.788 { 00:13:51.788 "name": "BaseBdev2", 00:13:51.788 "uuid": 
"8d36b370-8b15-4d94-95c5-19e94b9cb67e", 00:13:51.788 "is_configured": true, 00:13:51.788 "data_offset": 2048, 00:13:51.788 "data_size": 63488 00:13:51.788 } 00:13:51.788 ] 00:13:51.788 }' 00:13:51.788 04:51:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:51.788 04:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:52.355 04:51:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:52.613 [2024-05-15 04:51:06.741417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.871 04:51:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.871 04:51:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:52.871 "name": "Existed_Raid", 00:13:52.871 "uuid": "c31d1943-0fec-4691-a7d0-74884113d932", 00:13:52.871 "strip_size_kb": 0, 00:13:52.871 "state": "online", 00:13:52.871 "raid_level": "raid1", 00:13:52.871 "superblock": true, 00:13:52.871 "num_base_bdevs": 2, 00:13:52.871 "num_base_bdevs_discovered": 1, 00:13:52.871 "num_base_bdevs_operational": 1, 00:13:52.871 "base_bdevs_list": [ 00:13:52.871 { 00:13:52.871 "name": null, 00:13:52.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.871 "is_configured": false, 00:13:52.871 "data_offset": 2048, 00:13:52.871 "data_size": 63488 00:13:52.871 }, 00:13:52.871 { 00:13:52.871 "name": "BaseBdev2", 00:13:52.871 "uuid": "8d36b370-8b15-4d94-95c5-19e94b9cb67e", 00:13:52.871 "is_configured": true, 00:13:52.871 "data_offset": 2048, 00:13:52.871 "data_size": 63488 00:13:52.871 } 00:13:52.871 ] 00:13:52.871 }' 00:13:52.871 04:51:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:52.871 04:51:07 -- common/autotest_common.sh@10 -- # set +x 00:13:53.439 04:51:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:53.439 04:51:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:53.439 04:51:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.439 04:51:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:53.697 04:51:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:53.697 04:51:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:53.697 04:51:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:53.956 [2024-05-15 04:51:08.083600] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.956 [2024-05-15 04:51:08.083630] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.956 [2024-05-15 04:51:08.083673] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.956 [2024-05-15 04:51:08.181701] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.956 [2024-05-15 04:51:08.181744] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:13:54.214 04:51:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:54.214 04:51:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:54.214 04:51:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.214 04:51:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.215 04:51:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:54.215 04:51:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:54.215 04:51:08 -- bdev/bdev_raid.sh@287 -- # killprocess 48598 00:13:54.215 04:51:08 -- common/autotest_common.sh@926 -- # '[' -z 48598 ']' 00:13:54.215 04:51:08 -- common/autotest_common.sh@930 -- # kill -0 48598 00:13:54.215 04:51:08 -- common/autotest_common.sh@931 -- # uname 00:13:54.215 04:51:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:54.215 04:51:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48598 00:13:54.215 killing process with pid 48598 00:13:54.215 04:51:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:54.215 04:51:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:54.215 04:51:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48598' 00:13:54.215 04:51:08 -- common/autotest_common.sh@945 -- # kill 48598 00:13:54.215 04:51:08 -- common/autotest_common.sh@950 -- # wait 48598 00:13:54.215 [2024-05-15 04:51:08.432674] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.215 [2024-05-15 04:51:08.432785] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:56.120 00:13:56.120 real 0m10.763s 00:13:56.120 user 0m17.549s 00:13:56.120 sys 0m1.429s 00:13:56.120 04:51:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.120 ************************************ 00:13:56.120 END TEST raid_state_function_test_sb 00:13:56.120 ************************************ 00:13:56.120 04:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:56.120 04:51:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:56.120 04:51:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.120 04:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:56.120 ************************************ 00:13:56.120 START TEST raid_superblock_test 00:13:56.120 ************************************ 00:13:56.120 04:51:09 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 
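Every verify_raid_bdev_state call traced above reduces to one bdev_raid_get_bdevs RPC plus a jq filter on the returned array. A condensed sketch of that helper, simplified from the one exercised in test/bdev/bdev_raid.sh: the real version also asserts raid_level, strip_size and the discovered/operational base bdev counts seen in the locals above, while this cut-down form checks only the state field:

verify_raid_bdev_state() {
    local raid_bdev_name=$1 expected_state=$2
    local rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    local info state

    # Dump every raid bdev and keep the one we were asked about.
    info=$($rpc_py bdev_raid_get_bdevs all |
           jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    [[ -n $info ]] || { echo "raid bdev $raid_bdev_name not found" >&2; return 1; }

    state=$(jq -r '.state' <<<"$info")
    if [[ $state != "$expected_state" ]]; then
        echo "state $state does not match expected $expected_state" >&2
        return 1
    fi
}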
00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:13:56.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=48939 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 48939 /var/tmp/spdk-raid.sock 00:13:56.120 04:51:09 -- common/autotest_common.sh@819 -- # '[' -z 48939 ']' 00:13:56.120 04:51:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:56.120 04:51:09 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:56.120 04:51:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:56.120 04:51:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:56.120 04:51:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:56.120 04:51:09 -- common/autotest_common.sh@10 -- # set +x 00:13:56.120 [2024-05-15 04:51:10.094246] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
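While the app initializes (the EAL parameter dump continues below), this is the fixture raid_superblock_test builds once it is up: each base bdev is a 32 MiB, 512-byte-block malloc disk wrapped in a passthru bdev with a pinned UUID, so the superblock that bdev_raid_create -s writes records stable identities. The RPCs are the ones that appear verbatim later in this trace; only the loop is editorial shorthand:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# malloc1/malloc2-backed passthru bdevs pt1/pt2 with fixed UUIDs.
for i in 1 2; do
    $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
    $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# -s asks bdev_raid_create to write a superblock to every base bdev.
$rpc_py bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1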
00:13:56.120 [2024-05-15 04:51:10.094484] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48939 ] 00:13:56.120 [2024-05-15 04:51:10.283498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.379 [2024-05-15 04:51:10.559176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.638 [2024-05-15 04:51:10.824114] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.574 04:51:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:57.574 04:51:11 -- common/autotest_common.sh@852 -- # return 0 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:57.574 04:51:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:57.833 malloc1 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:57.833 [2024-05-15 04:51:11.961963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:57.833 [2024-05-15 04:51:11.962034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.833 [2024-05-15 04:51:11.962129] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:13:57.833 [2024-05-15 04:51:11.962162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.833 [2024-05-15 04:51:11.963678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.833 [2024-05-15 04:51:11.963727] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.833 pt1 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:57.833 04:51:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:57.834 04:51:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:58.093 malloc2 00:13:58.093 04:51:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:13:58.352 [2024-05-15 04:51:12.367039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:58.352 [2024-05-15 04:51:12.367115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.352 [2024-05-15 04:51:12.367176] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:13:58.352 [2024-05-15 04:51:12.367217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.352 [2024-05-15 04:51:12.369006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.352 [2024-05-15 04:51:12.369043] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:58.352 pt2 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:58.352 [2024-05-15 04:51:12.515125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:58.352 [2024-05-15 04:51:12.516418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:58.352 [2024-05-15 04:51:12.516531] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002a380 00:13:58.352 [2024-05-15 04:51:12.516541] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.352 [2024-05-15 04:51:12.516647] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:58.352 [2024-05-15 04:51:12.516907] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002a380 00:13:58.352 [2024-05-15 04:51:12.516918] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002a380 00:13:58.352 [2024-05-15 04:51:12.517016] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.352 04:51:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.611 04:51:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.611 "name": "raid_bdev1", 00:13:58.611 "uuid": "96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d", 00:13:58.611 "strip_size_kb": 0, 00:13:58.611 "state": "online", 00:13:58.611 "raid_level": "raid1", 00:13:58.611 "superblock": true, 00:13:58.611 "num_base_bdevs": 2, 00:13:58.611 "num_base_bdevs_discovered": 2, 00:13:58.611 
"num_base_bdevs_operational": 2, 00:13:58.611 "base_bdevs_list": [ 00:13:58.611 { 00:13:58.611 "name": "pt1", 00:13:58.611 "uuid": "d323812e-0784-5451-a6a9-e9bb751749a8", 00:13:58.611 "is_configured": true, 00:13:58.611 "data_offset": 2048, 00:13:58.611 "data_size": 63488 00:13:58.611 }, 00:13:58.611 { 00:13:58.611 "name": "pt2", 00:13:58.611 "uuid": "fd265214-6025-52e6-852f-9777d5c7aa22", 00:13:58.611 "is_configured": true, 00:13:58.611 "data_offset": 2048, 00:13:58.611 "data_size": 63488 00:13:58.611 } 00:13:58.611 ] 00:13:58.611 }' 00:13:58.611 04:51:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.611 04:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:59.177 04:51:13 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:59.177 04:51:13 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:59.435 [2024-05-15 04:51:13.423263] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.435 04:51:13 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d 00:13:59.435 04:51:13 -- bdev/bdev_raid.sh@380 -- # '[' -z 96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d ']' 00:13:59.435 04:51:13 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:59.435 [2024-05-15 04:51:13.643173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.435 [2024-05-15 04:51:13.643197] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.435 [2024-05-15 04:51:13.643257] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.435 [2024-05-15 04:51:13.643296] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.435 [2024-05-15 04:51:13.643304] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a380 name raid_bdev1, state offline 00:13:59.435 04:51:13 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:59.435 04:51:13 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.693 04:51:13 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:59.693 04:51:13 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:59.693 04:51:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.693 04:51:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:59.952 04:51:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.952 04:51:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:59.952 04:51:14 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:59.952 04:51:14 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:00.210 04:51:14 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:00.210 04:51:14 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:00.210 04:51:14 -- common/autotest_common.sh@640 -- # local es=0 00:14:00.210 04:51:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:00.210 04:51:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.210 04:51:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.210 04:51:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.210 04:51:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.210 04:51:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.210 04:51:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.210 04:51:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.210 04:51:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:00.210 04:51:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:00.469 [2024-05-15 04:51:14.507241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:00.469 [2024-05-15 04:51:14.508793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:00.469 [2024-05-15 04:51:14.508840] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:00.469 [2024-05-15 04:51:14.508907] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:00.469 [2024-05-15 04:51:14.508937] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.469 [2024-05-15 04:51:14.508948] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a980 name raid_bdev1, state configuring 00:14:00.469 request: 00:14:00.469 { 00:14:00.469 "name": "raid_bdev1", 00:14:00.469 "raid_level": "raid1", 00:14:00.469 "base_bdevs": [ 00:14:00.469 "malloc1", 00:14:00.469 "malloc2" 00:14:00.469 ], 00:14:00.469 "superblock": false, 00:14:00.469 "method": "bdev_raid_create", 00:14:00.469 "req_id": 1 00:14:00.469 } 00:14:00.469 Got JSON-RPC error response 00:14:00.469 response: 00:14:00.469 { 00:14:00.469 "code": -17, 00:14:00.469 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:00.469 } 00:14:00.469 04:51:14 -- common/autotest_common.sh@643 -- # es=1 00:14:00.469 04:51:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:00.469 04:51:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:00.469 04:51:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:00.469 04:51:14 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.469 04:51:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:00.728 [2024-05-15 04:51:14.867260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:00.728 [2024-05-15 04:51:14.867344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.728 [2024-05-15 04:51:14.867382] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:14:00.728 [2024-05-15 04:51:14.867407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.728 [2024-05-15 04:51:14.868924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.728 [2024-05-15 04:51:14.868974] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:00.728 [2024-05-15 04:51:14.869068] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:00.728 [2024-05-15 04:51:14.869125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:00.728 pt1 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.728 04:51:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.988 04:51:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.988 "name": "raid_bdev1", 00:14:00.988 "uuid": "96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d", 00:14:00.988 "strip_size_kb": 0, 00:14:00.988 "state": "configuring", 00:14:00.988 "raid_level": "raid1", 00:14:00.988 "superblock": true, 00:14:00.988 "num_base_bdevs": 2, 00:14:00.988 "num_base_bdevs_discovered": 1, 00:14:00.988 "num_base_bdevs_operational": 2, 00:14:00.988 "base_bdevs_list": [ 00:14:00.988 { 00:14:00.988 "name": "pt1", 00:14:00.988 "uuid": "d323812e-0784-5451-a6a9-e9bb751749a8", 00:14:00.988 "is_configured": true, 00:14:00.988 "data_offset": 2048, 00:14:00.988 "data_size": 63488 00:14:00.988 }, 00:14:00.988 { 00:14:00.988 "name": null, 00:14:00.988 "uuid": "fd265214-6025-52e6-852f-9777d5c7aa22", 00:14:00.988 "is_configured": false, 00:14:00.988 "data_offset": 2048, 00:14:00.988 "data_size": 63488 00:14:00.988 } 00:14:00.988 ] 00:14:00.988 }' 00:14:00.988 04:51:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.988 04:51:15 -- common/autotest_common.sh@10 -- # set +x 00:14:01.557 04:51:15 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:01.557 04:51:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:01.557 04:51:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:01.557 04:51:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.557 [2024-05-15 04:51:15.627331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.557 [2024-05-15 04:51:15.627423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.557 [2024-05-15 04:51:15.627466] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d380 00:14:01.557 [2024-05-15 04:51:15.627496] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.557 [2024-05-15 04:51:15.627975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.557 [2024-05-15 04:51:15.628015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.557 [2024-05-15 04:51:15.628102] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:01.557 [2024-05-15 04:51:15.628125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.557 [2024-05-15 04:51:15.628195] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002cd80 00:14:01.557 [2024-05-15 04:51:15.628204] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.557 [2024-05-15 04:51:15.628303] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:01.557 [2024-05-15 04:51:15.628470] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002cd80 00:14:01.557 [2024-05-15 04:51:15.628480] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002cd80 00:14:01.557 [2024-05-15 04:51:15.628571] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.557 pt2 00:14:01.557 04:51:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.558 04:51:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.817 04:51:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:01.817 "name": "raid_bdev1", 00:14:01.817 "uuid": "96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d", 00:14:01.817 "strip_size_kb": 0, 00:14:01.817 "state": "online", 00:14:01.817 "raid_level": "raid1", 00:14:01.817 "superblock": true, 00:14:01.817 "num_base_bdevs": 2, 00:14:01.817 "num_base_bdevs_discovered": 2, 00:14:01.817 "num_base_bdevs_operational": 2, 00:14:01.817 "base_bdevs_list": [ 00:14:01.817 { 00:14:01.817 "name": "pt1", 00:14:01.817 "uuid": "d323812e-0784-5451-a6a9-e9bb751749a8", 00:14:01.817 "is_configured": true, 00:14:01.817 "data_offset": 2048, 00:14:01.817 "data_size": 63488 00:14:01.817 }, 00:14:01.817 { 00:14:01.817 "name": "pt2", 00:14:01.817 "uuid": "fd265214-6025-52e6-852f-9777d5c7aa22", 00:14:01.817 "is_configured": true, 00:14:01.817 "data_offset": 2048, 00:14:01.817 "data_size": 63488 00:14:01.817 } 
00:14:01.817 ] 00:14:01.817 }' 00:14:01.817 04:51:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:01.817 04:51:15 -- common/autotest_common.sh@10 -- # set +x 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:02.386 [2024-05-15 04:51:16.563530] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@430 -- # '[' 96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d '!=' 96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d ']' 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:02.386 04:51:16 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:02.645 [2024-05-15 04:51:16.775532] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:02.645 04:51:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.646 04:51:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.905 04:51:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:02.905 "name": "raid_bdev1", 00:14:02.905 "uuid": "96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d", 00:14:02.905 "strip_size_kb": 0, 00:14:02.905 "state": "online", 00:14:02.905 "raid_level": "raid1", 00:14:02.905 "superblock": true, 00:14:02.905 "num_base_bdevs": 2, 00:14:02.905 "num_base_bdevs_discovered": 1, 00:14:02.905 "num_base_bdevs_operational": 1, 00:14:02.905 "base_bdevs_list": [ 00:14:02.905 { 00:14:02.905 "name": null, 00:14:02.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.905 "is_configured": false, 00:14:02.905 "data_offset": 2048, 00:14:02.905 "data_size": 63488 00:14:02.905 }, 00:14:02.905 { 00:14:02.905 "name": "pt2", 00:14:02.905 "uuid": "fd265214-6025-52e6-852f-9777d5c7aa22", 00:14:02.905 "is_configured": true, 00:14:02.905 "data_offset": 2048, 00:14:02.905 "data_size": 63488 00:14:02.905 } 00:14:02.905 ] 00:14:02.905 }' 00:14:02.905 04:51:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:02.905 04:51:17 -- common/autotest_common.sh@10 -- # set +x 00:14:03.474 04:51:17 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:03.741 [2024-05-15 04:51:17.747576] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.741 [2024-05-15 04:51:17.747608] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:14:03.741 [2024-05-15 04:51:17.747664] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.741 [2024-05-15 04:51:17.747699] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.741 [2024-05-15 04:51:17.747708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002cd80 name raid_bdev1, state offline 00:14:03.741 04:51:17 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:14:03.741 04:51:17 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.031 04:51:17 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:14:04.031 04:51:17 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:14:04.031 04:51:17 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:14:04.031 04:51:17 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:04.031 04:51:17 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:04.031 04:51:18 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:04.031 04:51:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:04.031 04:51:18 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:14:04.031 04:51:18 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:04.031 04:51:18 -- bdev/bdev_raid.sh@462 -- # i=1 00:14:04.031 04:51:18 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.307 [2024-05-15 04:51:18.267613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.307 [2024-05-15 04:51:18.267699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.307 [2024-05-15 04:51:18.267903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e880 00:14:04.307 [2024-05-15 04:51:18.267939] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.307 [2024-05-15 04:51:18.269493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.307 [2024-05-15 04:51:18.269536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.307 [2024-05-15 04:51:18.269621] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:04.307 [2024-05-15 04:51:18.269681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.307 [2024-05-15 04:51:18.269763] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000030080 00:14:04.307 [2024-05-15 04:51:18.269772] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.307 [2024-05-15 04:51:18.269844] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:14:04.307 pt2 00:14:04.307 [2024-05-15 04:51:18.270010] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000030080 00:14:04.307 [2024-05-15 04:51:18.270023] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000030080 00:14:04.307 [2024-05-15 04:51:18.270109] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.307 04:51:18 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:04.307 04:51:18 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:04.307 04:51:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:04.307 04:51:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:04.307 04:51:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:04.308 "name": "raid_bdev1", 00:14:04.308 "uuid": "96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d", 00:14:04.308 "strip_size_kb": 0, 00:14:04.308 "state": "online", 00:14:04.308 "raid_level": "raid1", 00:14:04.308 "superblock": true, 00:14:04.308 "num_base_bdevs": 2, 00:14:04.308 "num_base_bdevs_discovered": 1, 00:14:04.308 "num_base_bdevs_operational": 1, 00:14:04.308 "base_bdevs_list": [ 00:14:04.308 { 00:14:04.308 "name": null, 00:14:04.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.308 "is_configured": false, 00:14:04.308 "data_offset": 2048, 00:14:04.308 "data_size": 63488 00:14:04.308 }, 00:14:04.308 { 00:14:04.308 "name": "pt2", 00:14:04.308 "uuid": "fd265214-6025-52e6-852f-9777d5c7aa22", 00:14:04.308 "is_configured": true, 00:14:04.308 "data_offset": 2048, 00:14:04.308 "data_size": 63488 00:14:04.308 } 00:14:04.308 ] 00:14:04.308 }' 00:14:04.308 04:51:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:04.308 04:51:18 -- common/autotest_common.sh@10 -- # set +x 00:14:04.876 04:51:19 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:14:05.135 04:51:19 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:05.135 04:51:19 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:14:05.135 [2024-05-15 04:51:19.307852] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.135 04:51:19 -- bdev/bdev_raid.sh@506 -- # '[' 96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d '!=' 96b83ddf-b328-4f6b-9f0b-9fa6e8deea0d ']' 00:14:05.135 04:51:19 -- bdev/bdev_raid.sh@511 -- # killprocess 48939 00:14:05.135 04:51:19 -- common/autotest_common.sh@926 -- # '[' -z 48939 ']' 00:14:05.135 04:51:19 -- common/autotest_common.sh@930 -- # kill -0 48939 00:14:05.135 04:51:19 -- common/autotest_common.sh@931 -- # uname 00:14:05.135 04:51:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.135 04:51:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48939 00:14:05.135 killing process with pid 48939 00:14:05.135 04:51:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:05.135 04:51:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:05.135 04:51:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48939' 00:14:05.135 04:51:19 -- common/autotest_common.sh@945 -- # kill 48939 00:14:05.135 04:51:19 -- common/autotest_common.sh@950 -- # wait 48939 00:14:05.135 [2024-05-15 04:51:19.352887] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
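The kill/wait pair just above comes from the killprocess helper in autotest_common.sh. A trimmed sketch of it; the real helper also verifies it is running on Linux, special-cases processes launched under sudo (the reactor_0 = sudo comparison in the trace), and escalates when a plain kill is not enough, none of which is reproduced here:

killprocess() {
    local pid=$1 process_name

    # comm is captured because the full helper inspects it (sudo handling);
    # here it only doubles as a liveness check for an already-exited pid.
    process_name=$(ps --no-headers -o comm= "$pid") || return 0

    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the process so its exit status (and any sanitizer report) lands
    # in the log before the next test starts.
    wait "$pid"
}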
00:14:05.135 [2024-05-15 04:51:19.352949] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.135 [2024-05-15 04:51:19.352979] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.135 [2024-05-15 04:51:19.352988] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000030080 name raid_bdev1, state offline 00:14:05.394 [2024-05-15 04:51:19.550871] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.308 ************************************ 00:14:07.308 END TEST raid_superblock_test 00:14:07.308 ************************************ 00:14:07.308 04:51:21 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:07.308 00:14:07.308 real 0m11.112s 00:14:07.308 user 0m18.434s 00:14:07.308 sys 0m1.423s 00:14:07.308 04:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.308 04:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:07.308 04:51:21 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:07.308 04:51:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:07.308 04:51:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:07.308 04:51:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:07.308 04:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.308 04:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:07.308 ************************************ 00:14:07.308 START TEST raid_state_function_test 00:14:07.308 ************************************ 00:14:07.308 04:51:21 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:14:07.308 04:51:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:07.308 04:51:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@219 -- 
# '[' false = true ']' 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:07.309 Process raid pid: 49288 00:14:07.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=49288 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49288' 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49288 /var/tmp/spdk-raid.sock 00:14:07.309 04:51:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:07.309 04:51:21 -- common/autotest_common.sh@819 -- # '[' -z 49288 ']' 00:14:07.309 04:51:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:07.309 04:51:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:07.309 04:51:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:07.309 04:51:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:07.309 04:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:07.309 [2024-05-15 04:51:21.268449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:07.309 [2024-05-15 04:51:21.268670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.309 [2024-05-15 04:51:21.453732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.566 [2024-05-15 04:51:21.723653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.823 [2024-05-15 04:51:21.994147] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.758 04:51:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:08.758 04:51:22 -- common/autotest_common.sh@852 -- # return 0 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:08.758 [2024-05-15 04:51:22.840340] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.758 [2024-05-15 04:51:22.840409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.758 [2024-05-15 04:51:22.840420] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.758 [2024-05-15 04:51:22.840455] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.758 [2024-05-15 04:51:22.840462] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.758 [2024-05-15 04:51:22.840506] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.758 04:51:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.016 04:51:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.016 "name": "Existed_Raid", 00:14:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.016 "strip_size_kb": 64, 00:14:09.016 "state": "configuring", 00:14:09.016 "raid_level": "raid0", 00:14:09.016 "superblock": false, 00:14:09.016 "num_base_bdevs": 3, 00:14:09.016 "num_base_bdevs_discovered": 0, 00:14:09.016 "num_base_bdevs_operational": 3, 00:14:09.016 "base_bdevs_list": [ 00:14:09.016 { 00:14:09.016 "name": "BaseBdev1", 00:14:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.016 "is_configured": false, 00:14:09.016 "data_offset": 0, 00:14:09.016 "data_size": 0 00:14:09.016 }, 00:14:09.016 { 00:14:09.016 "name": "BaseBdev2", 00:14:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.016 "is_configured": false, 00:14:09.016 "data_offset": 0, 00:14:09.016 "data_size": 0 00:14:09.016 }, 00:14:09.016 { 00:14:09.016 "name": "BaseBdev3", 00:14:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.016 "is_configured": false, 00:14:09.016 "data_offset": 0, 00:14:09.017 "data_size": 0 00:14:09.017 } 00:14:09.017 ] 00:14:09.017 }' 00:14:09.017 04:51:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.017 04:51:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.582 04:51:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.582 [2024-05-15 04:51:23.780420] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.582 [2024-05-15 04:51:23.780466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:09.582 04:51:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:09.840 [2024-05-15 04:51:23.932422] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.840 [2024-05-15 04:51:23.932489] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.840 [2024-05-15 04:51:23.932499] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.840 [2024-05-15 04:51:23.932534] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.840 [2024-05-15 04:51:23.932541] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.840 [2024-05-15 04:51:23.932574] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.840 04:51:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:10.098 [2024-05-15 04:51:24.148280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:14:10.098 BaseBdev1 00:14:10.098 04:51:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:10.098 04:51:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:10.098 04:51:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:10.098 04:51:24 -- common/autotest_common.sh@889 -- # local i 00:14:10.098 04:51:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:10.098 04:51:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:10.098 04:51:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.098 04:51:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:10.355 [ 00:14:10.356 { 00:14:10.356 "name": "BaseBdev1", 00:14:10.356 "aliases": [ 00:14:10.356 "3a39e57e-8d41-42a0-b444-de32a5a9d4dd" 00:14:10.356 ], 00:14:10.356 "product_name": "Malloc disk", 00:14:10.356 "block_size": 512, 00:14:10.356 "num_blocks": 65536, 00:14:10.356 "uuid": "3a39e57e-8d41-42a0-b444-de32a5a9d4dd", 00:14:10.356 "assigned_rate_limits": { 00:14:10.356 "rw_ios_per_sec": 0, 00:14:10.356 "rw_mbytes_per_sec": 0, 00:14:10.356 "r_mbytes_per_sec": 0, 00:14:10.356 "w_mbytes_per_sec": 0 00:14:10.356 }, 00:14:10.356 "claimed": true, 00:14:10.356 "claim_type": "exclusive_write", 00:14:10.356 "zoned": false, 00:14:10.356 "supported_io_types": { 00:14:10.356 "read": true, 00:14:10.356 "write": true, 00:14:10.356 "unmap": true, 00:14:10.356 "write_zeroes": true, 00:14:10.356 "flush": true, 00:14:10.356 "reset": true, 00:14:10.356 "compare": false, 00:14:10.356 "compare_and_write": false, 00:14:10.356 "abort": true, 00:14:10.356 "nvme_admin": false, 00:14:10.356 "nvme_io": false 00:14:10.356 }, 00:14:10.356 "memory_domains": [ 00:14:10.356 { 00:14:10.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.356 "dma_device_type": 2 00:14:10.356 } 00:14:10.356 ], 00:14:10.356 "driver_specific": {} 00:14:10.356 } 00:14:10.356 ] 00:14:10.356 04:51:24 -- common/autotest_common.sh@895 -- # return 0 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.356 04:51:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.614 04:51:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.614 "name": "Existed_Raid", 00:14:10.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.614 "strip_size_kb": 64, 00:14:10.614 "state": "configuring", 00:14:10.614 "raid_level": "raid0", 00:14:10.614 "superblock": false, 00:14:10.614 "num_base_bdevs": 3, 00:14:10.614 
"num_base_bdevs_discovered": 1, 00:14:10.614 "num_base_bdevs_operational": 3, 00:14:10.614 "base_bdevs_list": [ 00:14:10.614 { 00:14:10.614 "name": "BaseBdev1", 00:14:10.614 "uuid": "3a39e57e-8d41-42a0-b444-de32a5a9d4dd", 00:14:10.614 "is_configured": true, 00:14:10.614 "data_offset": 0, 00:14:10.614 "data_size": 65536 00:14:10.614 }, 00:14:10.614 { 00:14:10.614 "name": "BaseBdev2", 00:14:10.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.614 "is_configured": false, 00:14:10.614 "data_offset": 0, 00:14:10.614 "data_size": 0 00:14:10.614 }, 00:14:10.614 { 00:14:10.614 "name": "BaseBdev3", 00:14:10.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.614 "is_configured": false, 00:14:10.614 "data_offset": 0, 00:14:10.614 "data_size": 0 00:14:10.614 } 00:14:10.614 ] 00:14:10.614 }' 00:14:10.614 04:51:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.614 04:51:24 -- common/autotest_common.sh@10 -- # set +x 00:14:11.179 04:51:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:11.438 [2024-05-15 04:51:25.420353] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.438 [2024-05-15 04:51:25.420396] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:11.438 [2024-05-15 04:51:25.560428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.438 [2024-05-15 04:51:25.561671] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.438 [2024-05-15 04:51:25.561735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.438 [2024-05-15 04:51:25.561746] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.438 [2024-05-15 04:51:25.561772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.438 04:51:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.697 04:51:25 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:14:11.697 "name": "Existed_Raid", 00:14:11.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.697 "strip_size_kb": 64, 00:14:11.697 "state": "configuring", 00:14:11.697 "raid_level": "raid0", 00:14:11.697 "superblock": false, 00:14:11.697 "num_base_bdevs": 3, 00:14:11.697 "num_base_bdevs_discovered": 1, 00:14:11.697 "num_base_bdevs_operational": 3, 00:14:11.697 "base_bdevs_list": [ 00:14:11.697 { 00:14:11.697 "name": "BaseBdev1", 00:14:11.697 "uuid": "3a39e57e-8d41-42a0-b444-de32a5a9d4dd", 00:14:11.697 "is_configured": true, 00:14:11.697 "data_offset": 0, 00:14:11.697 "data_size": 65536 00:14:11.697 }, 00:14:11.697 { 00:14:11.697 "name": "BaseBdev2", 00:14:11.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.697 "is_configured": false, 00:14:11.697 "data_offset": 0, 00:14:11.697 "data_size": 0 00:14:11.697 }, 00:14:11.697 { 00:14:11.697 "name": "BaseBdev3", 00:14:11.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.697 "is_configured": false, 00:14:11.697 "data_offset": 0, 00:14:11.697 "data_size": 0 00:14:11.697 } 00:14:11.697 ] 00:14:11.697 }' 00:14:11.697 04:51:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.697 04:51:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.264 04:51:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.522 [2024-05-15 04:51:26.653566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.522 BaseBdev2 00:14:12.522 04:51:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:12.522 04:51:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:12.522 04:51:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:12.522 04:51:26 -- common/autotest_common.sh@889 -- # local i 00:14:12.522 04:51:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:12.522 04:51:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:12.522 04:51:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.780 04:51:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.780 [ 00:14:12.780 { 00:14:12.780 "name": "BaseBdev2", 00:14:12.780 "aliases": [ 00:14:12.780 "97e068d0-f9b7-46be-b4e8-4197d938538a" 00:14:12.780 ], 00:14:12.780 "product_name": "Malloc disk", 00:14:12.780 "block_size": 512, 00:14:12.780 "num_blocks": 65536, 00:14:12.780 "uuid": "97e068d0-f9b7-46be-b4e8-4197d938538a", 00:14:12.780 "assigned_rate_limits": { 00:14:12.780 "rw_ios_per_sec": 0, 00:14:12.780 "rw_mbytes_per_sec": 0, 00:14:12.780 "r_mbytes_per_sec": 0, 00:14:12.780 "w_mbytes_per_sec": 0 00:14:12.780 }, 00:14:12.780 "claimed": true, 00:14:12.780 "claim_type": "exclusive_write", 00:14:12.780 "zoned": false, 00:14:12.780 "supported_io_types": { 00:14:12.780 "read": true, 00:14:12.780 "write": true, 00:14:12.780 "unmap": true, 00:14:12.780 "write_zeroes": true, 00:14:12.780 "flush": true, 00:14:12.780 "reset": true, 00:14:12.780 "compare": false, 00:14:12.780 "compare_and_write": false, 00:14:12.780 "abort": true, 00:14:12.780 "nvme_admin": false, 00:14:12.780 "nvme_io": false 00:14:12.780 }, 00:14:12.780 "memory_domains": [ 00:14:12.780 { 00:14:12.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.780 "dma_device_type": 2 00:14:12.780 } 00:14:12.780 ], 00:14:12.780 
"driver_specific": {} 00:14:12.780 } 00:14:12.780 ] 00:14:12.780 04:51:26 -- common/autotest_common.sh@895 -- # return 0 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.780 04:51:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.039 04:51:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.039 "name": "Existed_Raid", 00:14:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.039 "strip_size_kb": 64, 00:14:13.039 "state": "configuring", 00:14:13.039 "raid_level": "raid0", 00:14:13.039 "superblock": false, 00:14:13.039 "num_base_bdevs": 3, 00:14:13.039 "num_base_bdevs_discovered": 2, 00:14:13.039 "num_base_bdevs_operational": 3, 00:14:13.039 "base_bdevs_list": [ 00:14:13.039 { 00:14:13.039 "name": "BaseBdev1", 00:14:13.039 "uuid": "3a39e57e-8d41-42a0-b444-de32a5a9d4dd", 00:14:13.039 "is_configured": true, 00:14:13.039 "data_offset": 0, 00:14:13.039 "data_size": 65536 00:14:13.039 }, 00:14:13.039 { 00:14:13.039 "name": "BaseBdev2", 00:14:13.039 "uuid": "97e068d0-f9b7-46be-b4e8-4197d938538a", 00:14:13.039 "is_configured": true, 00:14:13.039 "data_offset": 0, 00:14:13.039 "data_size": 65536 00:14:13.039 }, 00:14:13.039 { 00:14:13.039 "name": "BaseBdev3", 00:14:13.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.039 "is_configured": false, 00:14:13.039 "data_offset": 0, 00:14:13.039 "data_size": 0 00:14:13.039 } 00:14:13.039 ] 00:14:13.039 }' 00:14:13.039 04:51:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.039 04:51:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.606 04:51:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.864 [2024-05-15 04:51:27.984270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.864 [2024-05-15 04:51:27.984316] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:14:13.864 [2024-05-15 04:51:27.984324] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:13.864 [2024-05-15 04:51:27.984420] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:13.864 [2024-05-15 04:51:27.984616] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:14:13.864 [2024-05-15 04:51:27.984626] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000028580 00:14:13.864 BaseBdev3 00:14:13.864 [2024-05-15 04:51:27.985115] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.864 04:51:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:13.864 04:51:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:13.864 04:51:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:13.864 04:51:28 -- common/autotest_common.sh@889 -- # local i 00:14:13.864 04:51:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:13.864 04:51:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:13.864 04:51:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.122 04:51:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:14.122 [ 00:14:14.122 { 00:14:14.122 "name": "BaseBdev3", 00:14:14.122 "aliases": [ 00:14:14.122 "5b6549b9-a713-4022-a3f2-cc63a0bcd97e" 00:14:14.123 ], 00:14:14.123 "product_name": "Malloc disk", 00:14:14.123 "block_size": 512, 00:14:14.123 "num_blocks": 65536, 00:14:14.123 "uuid": "5b6549b9-a713-4022-a3f2-cc63a0bcd97e", 00:14:14.123 "assigned_rate_limits": { 00:14:14.123 "rw_ios_per_sec": 0, 00:14:14.123 "rw_mbytes_per_sec": 0, 00:14:14.123 "r_mbytes_per_sec": 0, 00:14:14.123 "w_mbytes_per_sec": 0 00:14:14.123 }, 00:14:14.123 "claimed": true, 00:14:14.123 "claim_type": "exclusive_write", 00:14:14.123 "zoned": false, 00:14:14.123 "supported_io_types": { 00:14:14.123 "read": true, 00:14:14.123 "write": true, 00:14:14.123 "unmap": true, 00:14:14.123 "write_zeroes": true, 00:14:14.123 "flush": true, 00:14:14.123 "reset": true, 00:14:14.123 "compare": false, 00:14:14.123 "compare_and_write": false, 00:14:14.123 "abort": true, 00:14:14.123 "nvme_admin": false, 00:14:14.123 "nvme_io": false 00:14:14.123 }, 00:14:14.123 "memory_domains": [ 00:14:14.123 { 00:14:14.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.123 "dma_device_type": 2 00:14:14.123 } 00:14:14.123 ], 00:14:14.123 "driver_specific": {} 00:14:14.123 } 00:14:14.123 ] 00:14:14.123 04:51:28 -- common/autotest_common.sh@895 -- # return 0 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.123 04:51:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.381 04:51:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.381 "name": "Existed_Raid", 
00:14:14.381 "uuid": "af3d7f75-254d-470d-987b-74e4eb9e3761", 00:14:14.381 "strip_size_kb": 64, 00:14:14.381 "state": "online", 00:14:14.381 "raid_level": "raid0", 00:14:14.381 "superblock": false, 00:14:14.381 "num_base_bdevs": 3, 00:14:14.381 "num_base_bdevs_discovered": 3, 00:14:14.381 "num_base_bdevs_operational": 3, 00:14:14.381 "base_bdevs_list": [ 00:14:14.381 { 00:14:14.381 "name": "BaseBdev1", 00:14:14.381 "uuid": "3a39e57e-8d41-42a0-b444-de32a5a9d4dd", 00:14:14.381 "is_configured": true, 00:14:14.381 "data_offset": 0, 00:14:14.381 "data_size": 65536 00:14:14.381 }, 00:14:14.381 { 00:14:14.381 "name": "BaseBdev2", 00:14:14.381 "uuid": "97e068d0-f9b7-46be-b4e8-4197d938538a", 00:14:14.381 "is_configured": true, 00:14:14.381 "data_offset": 0, 00:14:14.381 "data_size": 65536 00:14:14.381 }, 00:14:14.381 { 00:14:14.381 "name": "BaseBdev3", 00:14:14.381 "uuid": "5b6549b9-a713-4022-a3f2-cc63a0bcd97e", 00:14:14.381 "is_configured": true, 00:14:14.381 "data_offset": 0, 00:14:14.381 "data_size": 65536 00:14:14.381 } 00:14:14.381 ] 00:14:14.381 }' 00:14:14.381 04:51:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.381 04:51:28 -- common/autotest_common.sh@10 -- # set +x 00:14:14.948 04:51:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:15.206 [2024-05-15 04:51:29.340463] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.206 [2024-05-15 04:51:29.340491] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.206 [2024-05-15 04:51:29.340548] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.464 04:51:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.465 "name": "Existed_Raid", 00:14:15.465 "uuid": "af3d7f75-254d-470d-987b-74e4eb9e3761", 00:14:15.465 "strip_size_kb": 64, 00:14:15.465 "state": "offline", 00:14:15.465 "raid_level": "raid0", 00:14:15.465 "superblock": false, 00:14:15.465 "num_base_bdevs": 3, 00:14:15.465 "num_base_bdevs_discovered": 2, 00:14:15.465 "num_base_bdevs_operational": 2, 00:14:15.465 "base_bdevs_list": [ 
00:14:15.465 { 00:14:15.465 "name": null, 00:14:15.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.465 "is_configured": false, 00:14:15.465 "data_offset": 0, 00:14:15.465 "data_size": 65536 00:14:15.465 }, 00:14:15.465 { 00:14:15.465 "name": "BaseBdev2", 00:14:15.465 "uuid": "97e068d0-f9b7-46be-b4e8-4197d938538a", 00:14:15.465 "is_configured": true, 00:14:15.465 "data_offset": 0, 00:14:15.465 "data_size": 65536 00:14:15.465 }, 00:14:15.465 { 00:14:15.465 "name": "BaseBdev3", 00:14:15.465 "uuid": "5b6549b9-a713-4022-a3f2-cc63a0bcd97e", 00:14:15.465 "is_configured": true, 00:14:15.465 "data_offset": 0, 00:14:15.465 "data_size": 65536 00:14:15.465 } 00:14:15.465 ] 00:14:15.465 }' 00:14:15.465 04:51:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.465 04:51:29 -- common/autotest_common.sh@10 -- # set +x 00:14:16.032 04:51:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:16.032 04:51:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:16.032 04:51:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.032 04:51:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:16.291 04:51:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:16.291 04:51:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.291 04:51:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:16.291 [2024-05-15 04:51:30.486574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.549 04:51:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:16.549 04:51:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:16.549 04:51:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.549 04:51:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:16.808 04:51:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:16.808 04:51:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.808 04:51:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:16.808 [2024-05-15 04:51:30.956828] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.808 [2024-05-15 04:51:30.956876] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:17.070 04:51:31 -- bdev/bdev_raid.sh@287 -- # killprocess 49288 00:14:17.070 04:51:31 -- common/autotest_common.sh@926 -- # '[' -z 49288 ']' 00:14:17.070 04:51:31 -- common/autotest_common.sh@930 -- # kill -0 49288 00:14:17.070 04:51:31 -- common/autotest_common.sh@931 -- # uname 00:14:17.070 04:51:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:17.070 04:51:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 49288 00:14:17.070 killing process with pid 
49288 00:14:17.070 04:51:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:17.070 04:51:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:17.070 04:51:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49288' 00:14:17.070 04:51:31 -- common/autotest_common.sh@945 -- # kill 49288 00:14:17.070 04:51:31 -- common/autotest_common.sh@950 -- # wait 49288 00:14:17.070 [2024-05-15 04:51:31.286894] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.070 [2024-05-15 04:51:31.287011] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.981 ************************************ 00:14:18.981 END TEST raid_state_function_test 00:14:18.981 ************************************ 00:14:18.981 04:51:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:18.981 00:14:18.981 real 0m11.618s 00:14:18.981 user 0m19.251s 00:14:18.981 sys 0m1.509s 00:14:18.981 04:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.981 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 04:51:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:18.981 04:51:32 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:18.981 04:51:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:18.981 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 ************************************ 00:14:18.981 START TEST raid_state_function_test_sb 00:14:18.981 ************************************ 00:14:18.981 04:51:32 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:14:18.981 04:51:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:18.981 04:51:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:18.981 04:51:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:18.982 
04:51:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:18.982 Process raid pid: 49668 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=49668 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49668' 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49668 /var/tmp/spdk-raid.sock 00:14:18.982 04:51:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:18.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:18.982 04:51:32 -- common/autotest_common.sh@819 -- # '[' -z 49668 ']' 00:14:18.982 04:51:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:18.982 04:51:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:18.982 04:51:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:18.982 04:51:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:18.982 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.982 [2024-05-15 04:51:32.954738] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:18.982 [2024-05-15 04:51:32.954965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.982 [2024-05-15 04:51:33.136748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.239 [2024-05-15 04:51:33.410613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.496 [2024-05-15 04:51:33.679716] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.432 04:51:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.432 04:51:34 -- common/autotest_common.sh@852 -- # return 0 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:20.432 [2024-05-15 04:51:34.616919] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.432 [2024-05-15 04:51:34.616988] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.432 [2024-05-15 04:51:34.616999] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.432 [2024-05-15 04:51:34.617034] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.432 [2024-05-15 04:51:34.617042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.432 [2024-05-15 04:51:34.617088] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:20.432 04:51:34 
-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.432 04:51:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.691 04:51:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.691 "name": "Existed_Raid", 00:14:20.691 "uuid": "b3a3e421-54f4-488f-8bb5-47de76027ef2", 00:14:20.691 "strip_size_kb": 64, 00:14:20.691 "state": "configuring", 00:14:20.691 "raid_level": "raid0", 00:14:20.691 "superblock": true, 00:14:20.691 "num_base_bdevs": 3, 00:14:20.691 "num_base_bdevs_discovered": 0, 00:14:20.691 "num_base_bdevs_operational": 3, 00:14:20.691 "base_bdevs_list": [ 00:14:20.691 { 00:14:20.691 "name": "BaseBdev1", 00:14:20.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.691 "is_configured": false, 00:14:20.691 "data_offset": 0, 00:14:20.691 "data_size": 0 00:14:20.691 }, 00:14:20.691 { 00:14:20.691 "name": "BaseBdev2", 00:14:20.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.691 "is_configured": false, 00:14:20.691 "data_offset": 0, 00:14:20.691 "data_size": 0 00:14:20.691 }, 00:14:20.691 { 00:14:20.691 "name": "BaseBdev3", 00:14:20.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.691 "is_configured": false, 00:14:20.691 "data_offset": 0, 00:14:20.691 "data_size": 0 00:14:20.691 } 00:14:20.691 ] 00:14:20.691 }' 00:14:20.691 04:51:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.691 04:51:34 -- common/autotest_common.sh@10 -- # set +x 00:14:21.258 04:51:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:21.258 [2024-05-15 04:51:35.472900] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.258 [2024-05-15 04:51:35.472942] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:21.258 04:51:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:21.518 [2024-05-15 04:51:35.685009] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.518 [2024-05-15 04:51:35.685072] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.518 [2024-05-15 04:51:35.685083] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.518 [2024-05-15 04:51:35.685117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.518 [2024-05-15 04:51:35.685125] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.518 [2024-05-15 04:51:35.685158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.518 04:51:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.778 [2024-05-15 04:51:35.876541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.778 BaseBdev1 
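Note: the _sb variant of this test differs from the previous one only in passing -s to bdev_raid_create, which writes an on-disk superblock; that is why data_offset becomes 2048 blocks (1 MB) and data_size drops to 63488 in the JSON dumps, versus 0/65536 without it. A condensed reproduction of the create sequence the trace builds up to (rpc.py path, socket, sizes, and flags are copied verbatim from the trace; running the three creates up front simply takes the array straight to online instead of configuring):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # 32 MB at a 512-byte block size gives the 65536-block Malloc disks
  # that appear in the bdev_get_bdevs dumps
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
  done

  # -z 64 sets a 64 KB strip size; -s requests the on-disk superblock
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  "$rpc" -s "$sock" bdev_raid_get_bdevs all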
00:14:21.778 04:51:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:21.778 04:51:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:21.778 04:51:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:21.778 04:51:35 -- common/autotest_common.sh@889 -- # local i 00:14:21.778 04:51:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:21.778 04:51:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:21.778 04:51:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.037 04:51:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.037 [ 00:14:22.037 { 00:14:22.037 "name": "BaseBdev1", 00:14:22.037 "aliases": [ 00:14:22.037 "f3b794a3-689a-4129-9c8b-31bca24f4b88" 00:14:22.037 ], 00:14:22.037 "product_name": "Malloc disk", 00:14:22.037 "block_size": 512, 00:14:22.037 "num_blocks": 65536, 00:14:22.037 "uuid": "f3b794a3-689a-4129-9c8b-31bca24f4b88", 00:14:22.037 "assigned_rate_limits": { 00:14:22.037 "rw_ios_per_sec": 0, 00:14:22.037 "rw_mbytes_per_sec": 0, 00:14:22.037 "r_mbytes_per_sec": 0, 00:14:22.037 "w_mbytes_per_sec": 0 00:14:22.037 }, 00:14:22.037 "claimed": true, 00:14:22.037 "claim_type": "exclusive_write", 00:14:22.037 "zoned": false, 00:14:22.037 "supported_io_types": { 00:14:22.037 "read": true, 00:14:22.037 "write": true, 00:14:22.037 "unmap": true, 00:14:22.037 "write_zeroes": true, 00:14:22.037 "flush": true, 00:14:22.037 "reset": true, 00:14:22.037 "compare": false, 00:14:22.037 "compare_and_write": false, 00:14:22.037 "abort": true, 00:14:22.037 "nvme_admin": false, 00:14:22.037 "nvme_io": false 00:14:22.037 }, 00:14:22.037 "memory_domains": [ 00:14:22.037 { 00:14:22.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.037 "dma_device_type": 2 00:14:22.037 } 00:14:22.037 ], 00:14:22.037 "driver_specific": {} 00:14:22.037 } 00:14:22.037 ] 00:14:22.037 04:51:36 -- common/autotest_common.sh@895 -- # return 0 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.037 04:51:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.296 04:51:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.296 "name": "Existed_Raid", 00:14:22.296 "uuid": "050d258c-76e9-432b-9108-62fc93d90dc6", 00:14:22.296 "strip_size_kb": 64, 00:14:22.296 "state": "configuring", 00:14:22.296 "raid_level": "raid0", 00:14:22.296 "superblock": true, 00:14:22.296 "num_base_bdevs": 3, 00:14:22.296 "num_base_bdevs_discovered": 1, 00:14:22.296 
"num_base_bdevs_operational": 3, 00:14:22.296 "base_bdevs_list": [ 00:14:22.296 { 00:14:22.296 "name": "BaseBdev1", 00:14:22.296 "uuid": "f3b794a3-689a-4129-9c8b-31bca24f4b88", 00:14:22.296 "is_configured": true, 00:14:22.296 "data_offset": 2048, 00:14:22.296 "data_size": 63488 00:14:22.296 }, 00:14:22.296 { 00:14:22.296 "name": "BaseBdev2", 00:14:22.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.296 "is_configured": false, 00:14:22.296 "data_offset": 0, 00:14:22.296 "data_size": 0 00:14:22.296 }, 00:14:22.296 { 00:14:22.296 "name": "BaseBdev3", 00:14:22.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.296 "is_configured": false, 00:14:22.296 "data_offset": 0, 00:14:22.296 "data_size": 0 00:14:22.296 } 00:14:22.296 ] 00:14:22.296 }' 00:14:22.296 04:51:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.296 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:14:22.865 04:51:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.124 [2024-05-15 04:51:37.124646] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.124 [2024-05-15 04:51:37.124694] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:14:23.124 04:51:37 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:23.124 04:51:37 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:23.383 04:51:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.383 BaseBdev1 00:14:23.383 04:51:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:23.383 04:51:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:23.383 04:51:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:23.383 04:51:37 -- common/autotest_common.sh@889 -- # local i 00:14:23.383 04:51:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:23.383 04:51:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:23.383 04:51:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:23.641 04:51:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.641 [ 00:14:23.641 { 00:14:23.641 "name": "BaseBdev1", 00:14:23.641 "aliases": [ 00:14:23.641 "1fca1b1b-ec55-430d-8146-89f363c7fa63" 00:14:23.641 ], 00:14:23.641 "product_name": "Malloc disk", 00:14:23.641 "block_size": 512, 00:14:23.641 "num_blocks": 65536, 00:14:23.641 "uuid": "1fca1b1b-ec55-430d-8146-89f363c7fa63", 00:14:23.641 "assigned_rate_limits": { 00:14:23.641 "rw_ios_per_sec": 0, 00:14:23.641 "rw_mbytes_per_sec": 0, 00:14:23.641 "r_mbytes_per_sec": 0, 00:14:23.641 "w_mbytes_per_sec": 0 00:14:23.641 }, 00:14:23.641 "claimed": false, 00:14:23.641 "zoned": false, 00:14:23.641 "supported_io_types": { 00:14:23.641 "read": true, 00:14:23.641 "write": true, 00:14:23.641 "unmap": true, 00:14:23.641 "write_zeroes": true, 00:14:23.641 "flush": true, 00:14:23.641 "reset": true, 00:14:23.641 "compare": false, 00:14:23.641 "compare_and_write": false, 00:14:23.641 "abort": true, 00:14:23.641 "nvme_admin": false, 00:14:23.641 "nvme_io": false 00:14:23.641 }, 00:14:23.641 "memory_domains": [ 00:14:23.641 { 00:14:23.641 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.641 "dma_device_type": 2 00:14:23.641 } 00:14:23.641 ], 00:14:23.641 "driver_specific": {} 00:14:23.641 } 00:14:23.641 ] 00:14:23.901 04:51:37 -- common/autotest_common.sh@895 -- # return 0 00:14:23.901 04:51:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:23.901 [2024-05-15 04:51:38.000802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.901 [2024-05-15 04:51:38.002110] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.901 [2024-05-15 04:51:38.002164] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.901 [2024-05-15 04:51:38.002174] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.901 [2024-05-15 04:51:38.002198] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.901 04:51:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.160 04:51:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.160 "name": "Existed_Raid", 00:14:24.160 "uuid": "1abea385-98fc-4e04-a182-daa1feb407ab", 00:14:24.160 "strip_size_kb": 64, 00:14:24.160 "state": "configuring", 00:14:24.160 "raid_level": "raid0", 00:14:24.160 "superblock": true, 00:14:24.160 "num_base_bdevs": 3, 00:14:24.160 "num_base_bdevs_discovered": 1, 00:14:24.160 "num_base_bdevs_operational": 3, 00:14:24.160 "base_bdevs_list": [ 00:14:24.160 { 00:14:24.160 "name": "BaseBdev1", 00:14:24.160 "uuid": "1fca1b1b-ec55-430d-8146-89f363c7fa63", 00:14:24.160 "is_configured": true, 00:14:24.160 "data_offset": 2048, 00:14:24.160 "data_size": 63488 00:14:24.160 }, 00:14:24.160 { 00:14:24.160 "name": "BaseBdev2", 00:14:24.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.160 "is_configured": false, 00:14:24.160 "data_offset": 0, 00:14:24.160 "data_size": 0 00:14:24.160 }, 00:14:24.160 { 00:14:24.160 "name": "BaseBdev3", 00:14:24.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.160 "is_configured": false, 00:14:24.160 "data_offset": 0, 00:14:24.160 "data_size": 0 00:14:24.160 } 00:14:24.160 ] 00:14:24.160 }' 00:14:24.160 04:51:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.160 04:51:38 -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.728 04:51:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.986 BaseBdev2 00:14:24.986 [2024-05-15 04:51:39.045922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.986 04:51:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:24.986 04:51:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:24.986 04:51:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.986 04:51:39 -- common/autotest_common.sh@889 -- # local i 00:14:24.986 04:51:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.986 04:51:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.986 04:51:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.987 04:51:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.245 [ 00:14:25.245 { 00:14:25.245 "name": "BaseBdev2", 00:14:25.245 "aliases": [ 00:14:25.245 "df5b18c6-d37d-449f-ba98-05ff2b691e6d" 00:14:25.245 ], 00:14:25.245 "product_name": "Malloc disk", 00:14:25.245 "block_size": 512, 00:14:25.246 "num_blocks": 65536, 00:14:25.246 "uuid": "df5b18c6-d37d-449f-ba98-05ff2b691e6d", 00:14:25.246 "assigned_rate_limits": { 00:14:25.246 "rw_ios_per_sec": 0, 00:14:25.246 "rw_mbytes_per_sec": 0, 00:14:25.246 "r_mbytes_per_sec": 0, 00:14:25.246 "w_mbytes_per_sec": 0 00:14:25.246 }, 00:14:25.246 "claimed": true, 00:14:25.246 "claim_type": "exclusive_write", 00:14:25.246 "zoned": false, 00:14:25.246 "supported_io_types": { 00:14:25.246 "read": true, 00:14:25.246 "write": true, 00:14:25.246 "unmap": true, 00:14:25.246 "write_zeroes": true, 00:14:25.246 "flush": true, 00:14:25.246 "reset": true, 00:14:25.246 "compare": false, 00:14:25.246 "compare_and_write": false, 00:14:25.246 "abort": true, 00:14:25.246 "nvme_admin": false, 00:14:25.246 "nvme_io": false 00:14:25.246 }, 00:14:25.246 "memory_domains": [ 00:14:25.246 { 00:14:25.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.246 "dma_device_type": 2 00:14:25.246 } 00:14:25.246 ], 00:14:25.246 "driver_specific": {} 00:14:25.246 } 00:14:25.246 ] 00:14:25.246 04:51:39 -- common/autotest_common.sh@895 -- # return 0 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
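Note: every verify_raid_bdev_state call in this log follows the same recipe: dump all raid bdevs over RPC, select the bdev under test with jq, then compare individual fields against the expected values. A minimal sketch of that recipe, assuming jq is available (the helper name verify_state and the two-field check are an illustrative condensation of the @117..@129 block, not the test's verbatim source; the rpc.py path and subcommands are taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  verify_state() {
      local name=$1 state=$2 discovered=$3
      local info
      # dump all raid bdevs and keep only the one under test
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      # field-by-field comparison against the expected values
      [ "$(jq -r .state <<<"$info")" = "$state" ] || return 1
      [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" = "$discovered" ] || return 1
  }

  # e.g. at this point, with BaseBdev2 claimed, two of three bases are discovered:
  verify_state Existed_Raid configuring 2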
00:14:25.246 04:51:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.505 04:51:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.505 "name": "Existed_Raid", 00:14:25.505 "uuid": "1abea385-98fc-4e04-a182-daa1feb407ab", 00:14:25.505 "strip_size_kb": 64, 00:14:25.505 "state": "configuring", 00:14:25.505 "raid_level": "raid0", 00:14:25.505 "superblock": true, 00:14:25.505 "num_base_bdevs": 3, 00:14:25.505 "num_base_bdevs_discovered": 2, 00:14:25.505 "num_base_bdevs_operational": 3, 00:14:25.505 "base_bdevs_list": [ 00:14:25.505 { 00:14:25.505 "name": "BaseBdev1", 00:14:25.505 "uuid": "1fca1b1b-ec55-430d-8146-89f363c7fa63", 00:14:25.505 "is_configured": true, 00:14:25.505 "data_offset": 2048, 00:14:25.505 "data_size": 63488 00:14:25.505 }, 00:14:25.505 { 00:14:25.505 "name": "BaseBdev2", 00:14:25.505 "uuid": "df5b18c6-d37d-449f-ba98-05ff2b691e6d", 00:14:25.505 "is_configured": true, 00:14:25.505 "data_offset": 2048, 00:14:25.505 "data_size": 63488 00:14:25.505 }, 00:14:25.505 { 00:14:25.505 "name": "BaseBdev3", 00:14:25.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.505 "is_configured": false, 00:14:25.505 "data_offset": 0, 00:14:25.505 "data_size": 0 00:14:25.505 } 00:14:25.505 ] 00:14:25.505 }' 00:14:25.505 04:51:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.505 04:51:39 -- common/autotest_common.sh@10 -- # set +x 00:14:26.072 04:51:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:26.332 [2024-05-15 04:51:40.395169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.332 [2024-05-15 04:51:40.395348] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:14:26.332 [2024-05-15 04:51:40.395362] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:26.332 [2024-05-15 04:51:40.395440] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:26.332 [2024-05-15 04:51:40.395631] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:14:26.332 [2024-05-15 04:51:40.395641] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:14:26.332 BaseBdev3 00:14:26.332 [2024-05-15 04:51:40.396017] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.332 04:51:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:26.332 04:51:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:26.332 04:51:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:26.332 04:51:40 -- common/autotest_common.sh@889 -- # local i 00:14:26.332 04:51:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:26.332 04:51:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:26.332 04:51:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.591 04:51:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:26.591 [ 00:14:26.591 { 00:14:26.591 "name": "BaseBdev3", 00:14:26.591 "aliases": [ 00:14:26.591 "3da4ae4d-bcf5-4f72-9922-9dd8c0cb1161" 00:14:26.591 ], 00:14:26.591 "product_name": "Malloc disk", 00:14:26.591 "block_size": 512, 00:14:26.591 "num_blocks": 65536, 00:14:26.591 
"uuid": "3da4ae4d-bcf5-4f72-9922-9dd8c0cb1161", 00:14:26.591 "assigned_rate_limits": { 00:14:26.591 "rw_ios_per_sec": 0, 00:14:26.591 "rw_mbytes_per_sec": 0, 00:14:26.591 "r_mbytes_per_sec": 0, 00:14:26.591 "w_mbytes_per_sec": 0 00:14:26.591 }, 00:14:26.591 "claimed": true, 00:14:26.591 "claim_type": "exclusive_write", 00:14:26.591 "zoned": false, 00:14:26.591 "supported_io_types": { 00:14:26.591 "read": true, 00:14:26.591 "write": true, 00:14:26.591 "unmap": true, 00:14:26.591 "write_zeroes": true, 00:14:26.591 "flush": true, 00:14:26.591 "reset": true, 00:14:26.591 "compare": false, 00:14:26.591 "compare_and_write": false, 00:14:26.591 "abort": true, 00:14:26.591 "nvme_admin": false, 00:14:26.591 "nvme_io": false 00:14:26.591 }, 00:14:26.591 "memory_domains": [ 00:14:26.591 { 00:14:26.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.591 "dma_device_type": 2 00:14:26.591 } 00:14:26.591 ], 00:14:26.591 "driver_specific": {} 00:14:26.591 } 00:14:26.591 ] 00:14:26.591 04:51:40 -- common/autotest_common.sh@895 -- # return 0 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.591 04:51:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.851 04:51:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.851 "name": "Existed_Raid", 00:14:26.851 "uuid": "1abea385-98fc-4e04-a182-daa1feb407ab", 00:14:26.851 "strip_size_kb": 64, 00:14:26.851 "state": "online", 00:14:26.851 "raid_level": "raid0", 00:14:26.851 "superblock": true, 00:14:26.851 "num_base_bdevs": 3, 00:14:26.851 "num_base_bdevs_discovered": 3, 00:14:26.851 "num_base_bdevs_operational": 3, 00:14:26.851 "base_bdevs_list": [ 00:14:26.851 { 00:14:26.851 "name": "BaseBdev1", 00:14:26.851 "uuid": "1fca1b1b-ec55-430d-8146-89f363c7fa63", 00:14:26.851 "is_configured": true, 00:14:26.851 "data_offset": 2048, 00:14:26.851 "data_size": 63488 00:14:26.851 }, 00:14:26.851 { 00:14:26.851 "name": "BaseBdev2", 00:14:26.851 "uuid": "df5b18c6-d37d-449f-ba98-05ff2b691e6d", 00:14:26.851 "is_configured": true, 00:14:26.851 "data_offset": 2048, 00:14:26.851 "data_size": 63488 00:14:26.851 }, 00:14:26.851 { 00:14:26.851 "name": "BaseBdev3", 00:14:26.851 "uuid": "3da4ae4d-bcf5-4f72-9922-9dd8c0cb1161", 00:14:26.851 "is_configured": true, 00:14:26.851 "data_offset": 2048, 00:14:26.851 "data_size": 63488 00:14:26.851 } 00:14:26.851 ] 00:14:26.851 }' 00:14:26.851 04:51:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.851 04:51:40 -- common/autotest_common.sh@10 -- # set +x 00:14:27.419 
04:51:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:27.678 [2024-05-15 04:51:41.707363] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.678 [2024-05-15 04:51:41.707390] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.678 [2024-05-15 04:51:41.707432] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.678 04:51:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.936 04:51:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.936 "name": "Existed_Raid", 00:14:27.936 "uuid": "1abea385-98fc-4e04-a182-daa1feb407ab", 00:14:27.936 "strip_size_kb": 64, 00:14:27.936 "state": "offline", 00:14:27.936 "raid_level": "raid0", 00:14:27.936 "superblock": true, 00:14:27.936 "num_base_bdevs": 3, 00:14:27.936 "num_base_bdevs_discovered": 2, 00:14:27.936 "num_base_bdevs_operational": 2, 00:14:27.936 "base_bdevs_list": [ 00:14:27.936 { 00:14:27.936 "name": null, 00:14:27.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.936 "is_configured": false, 00:14:27.936 "data_offset": 2048, 00:14:27.936 "data_size": 63488 00:14:27.936 }, 00:14:27.936 { 00:14:27.936 "name": "BaseBdev2", 00:14:27.936 "uuid": "df5b18c6-d37d-449f-ba98-05ff2b691e6d", 00:14:27.936 "is_configured": true, 00:14:27.936 "data_offset": 2048, 00:14:27.936 "data_size": 63488 00:14:27.936 }, 00:14:27.936 { 00:14:27.936 "name": "BaseBdev3", 00:14:27.936 "uuid": "3da4ae4d-bcf5-4f72-9922-9dd8c0cb1161", 00:14:27.936 "is_configured": true, 00:14:27.936 "data_offset": 2048, 00:14:27.936 "data_size": 63488 00:14:27.936 } 00:14:27.936 ] 00:14:27.936 }' 00:14:27.936 04:51:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.936 04:51:42 -- common/autotest_common.sh@10 -- # set +x 00:14:28.503 04:51:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:28.503 04:51:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:28.503 04:51:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:28.503 04:51:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
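[annotation] The expected_state=offline switch above is driven by has_redundancy: raid0 stripes with no redundancy, so deleting BaseBdev1 must take the whole array from online to offline rather than merely degrading it. The traced case statement amounts to something like the following (an illustration; the exact list of levels in bdev_raid.sh may differ):

    has_redundancy() {
        case $1 in
            raid1|raid5f) return 0 ;;  # survives a missing base bdev
            *) return 1 ;;             # raid0/concat: any lost member takes the array offline
        esac
    }

    expected_state=online
    has_redundancy raid0 || expected_state=offline   # the branch taken above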
00:14:28.761 04:51:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:28.761 04:51:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:28.761 04:51:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:28.761 [2024-05-15 04:51:42.953040] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.019 04:51:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:29.019 04:51:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:29.019 04:51:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:29.019 04:51:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.277 04:51:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:29.277 04:51:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.277 04:51:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:29.277 [2024-05-15 04:51:43.464556] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.277 [2024-05-15 04:51:43.464606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:14:29.535 04:51:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:29.536 04:51:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:29.536 04:51:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.536 04:51:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:29.536 04:51:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:29.536 04:51:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:29.536 04:51:43 -- bdev/bdev_raid.sh@287 -- # killprocess 49668 00:14:29.536 04:51:43 -- common/autotest_common.sh@926 -- # '[' -z 49668 ']' 00:14:29.536 04:51:43 -- common/autotest_common.sh@930 -- # kill -0 49668 00:14:29.536 04:51:43 -- common/autotest_common.sh@931 -- # uname 00:14:29.536 04:51:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:29.536 04:51:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 49668 00:14:29.536 killing process with pid 49668 00:14:29.536 04:51:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:29.536 04:51:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:29.536 04:51:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49668' 00:14:29.536 04:51:43 -- common/autotest_common.sh@945 -- # kill 49668 00:14:29.536 04:51:43 -- common/autotest_common.sh@950 -- # wait 49668 00:14:29.536 [2024-05-15 04:51:43.758188] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.536 [2024-05-15 04:51:43.758286] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:31.452 00:14:31.452 real 0m12.411s 00:14:31.452 user 0m20.518s 00:14:31.452 sys 0m1.642s 00:14:31.452 04:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.452 ************************************ 00:14:31.452 END TEST raid_state_function_test_sb 00:14:31.452 ************************************ 00:14:31.452 04:51:45 -- common/autotest_common.sh@10 -- # set +x 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 
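[annotation] The killprocess 49668 trace that closes the test follows the standard autotest teardown: check that the pid is still alive and still names an SPDK reactor before signalling it, then wait so the exit status is reaped. Condensed from the trace (a sketch of the autotest_common.sh helper, not a verbatim copy):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # process still running?
        # sanity check: the pid should still be an SPDK reactor thread
        if [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
            echo "killing process with pid $pid"
        fi
        kill "$pid"
        wait "$pid"                                      # reap the child
    }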
00:14:31.452 04:51:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:31.452 04:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:31.452 04:51:45 -- common/autotest_common.sh@10 -- # set +x 00:14:31.452 ************************************ 00:14:31.452 START TEST raid_superblock_test 00:14:31.452 ************************************ 00:14:31.452 04:51:45 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:31.452 04:51:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=50054 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50054 /var/tmp/spdk-raid.sock 00:14:31.453 04:51:45 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:31.453 04:51:45 -- common/autotest_common.sh@819 -- # '[' -z 50054 ']' 00:14:31.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.453 04:51:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.453 04:51:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:31.453 04:51:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.453 04:51:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:31.453 04:51:45 -- common/autotest_common.sh@10 -- # set +x 00:14:31.453 [2024-05-15 04:51:45.421368] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:31.453 [2024-05-15 04:51:45.421599] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50054 ] 00:14:31.453 [2024-05-15 04:51:45.603462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.711 [2024-05-15 04:51:45.881929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.970 [2024-05-15 04:51:46.143365] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.907 04:51:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:32.907 04:51:46 -- common/autotest_common.sh@852 -- # return 0 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:32.907 04:51:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:32.907 malloc1 00:14:32.907 04:51:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.166 [2024-05-15 04:51:47.313763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.166 [2024-05-15 04:51:47.313845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.166 [2024-05-15 04:51:47.313898] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:14:33.166 [2024-05-15 04:51:47.313940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.166 [2024-05-15 04:51:47.315488] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.166 [2024-05-15 04:51:47.315525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.166 pt1 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.166 04:51:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:33.424 malloc2 00:14:33.424 04:51:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:14:33.682 [2024-05-15 04:51:47.777333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.682 [2024-05-15 04:51:47.777409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.682 [2024-05-15 04:51:47.777452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:14:33.682 [2024-05-15 04:51:47.777490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.682 [2024-05-15 04:51:47.779301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.682 [2024-05-15 04:51:47.779336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.682 pt2 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.682 04:51:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:33.941 malloc3 00:14:33.941 04:51:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:33.941 [2024-05-15 04:51:48.106844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:33.941 [2024-05-15 04:51:48.106930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.941 [2024-05-15 04:51:48.106973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:14:33.941 [2024-05-15 04:51:48.107007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.941 [2024-05-15 04:51:48.108415] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.941 [2024-05-15 04:51:48.108452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:33.941 pt3 00:14:33.941 04:51:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:33.941 04:51:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.941 04:51:48 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:34.200 [2024-05-15 04:51:48.246930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.200 [2024-05-15 04:51:48.247930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.200 [2024-05-15 04:51:48.247964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:34.200 [2024-05-15 04:51:48.248049] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002c180 00:14:34.201 [2024-05-15 04:51:48.248059] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:34.201 [2024-05-15 04:51:48.248161] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:14:34.201 [2024-05-15 04:51:48.248364] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002c180 00:14:34.201 [2024-05-15 04:51:48.248373] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002c180 00:14:34.201 [2024-05-15 04:51:48.248461] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.201 04:51:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.460 04:51:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.460 "name": "raid_bdev1", 00:14:34.460 "uuid": "00429cd8-99da-4388-8596-1e0c6a014888", 00:14:34.460 "strip_size_kb": 64, 00:14:34.460 "state": "online", 00:14:34.460 "raid_level": "raid0", 00:14:34.460 "superblock": true, 00:14:34.460 "num_base_bdevs": 3, 00:14:34.460 "num_base_bdevs_discovered": 3, 00:14:34.460 "num_base_bdevs_operational": 3, 00:14:34.460 "base_bdevs_list": [ 00:14:34.460 { 00:14:34.460 "name": "pt1", 00:14:34.460 "uuid": "456c5a4b-5d97-5f9d-8272-12376ffd483b", 00:14:34.460 "is_configured": true, 00:14:34.460 "data_offset": 2048, 00:14:34.460 "data_size": 63488 00:14:34.460 }, 00:14:34.460 { 00:14:34.460 "name": "pt2", 00:14:34.460 "uuid": "363f04a0-a42d-524b-bc00-23afcf75ca37", 00:14:34.460 "is_configured": true, 00:14:34.460 "data_offset": 2048, 00:14:34.460 "data_size": 63488 00:14:34.460 }, 00:14:34.460 { 00:14:34.460 "name": "pt3", 00:14:34.460 "uuid": "06b08305-16f0-5c0f-999d-bd28cae99278", 00:14:34.460 "is_configured": true, 00:14:34.460 "data_offset": 2048, 00:14:34.460 "data_size": 63488 00:14:34.460 } 00:14:34.460 ] 00:14:34.460 }' 00:14:34.460 04:51:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.460 04:51:48 -- common/autotest_common.sh@10 -- # set +x 00:14:35.028 04:51:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.028 04:51:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:35.287 [2024-05-15 04:51:49.315129] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.287 04:51:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=00429cd8-99da-4388-8596-1e0c6a014888 00:14:35.287 04:51:49 -- bdev/bdev_raid.sh@380 -- # '[' -z 00429cd8-99da-4388-8596-1e0c6a014888 ']' 00:14:35.287 04:51:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:35.546 [2024-05-15 04:51:49.535034] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.546 [2024-05-15 04:51:49.535059] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.546 [2024-05-15 04:51:49.535125] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.546 [2024-05-15 04:51:49.535173] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.546 [2024-05-15 04:51:49.535184] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c180 name raid_bdev1, state offline 00:14:35.546 04:51:49 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.546 04:51:49 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:35.805 04:51:49 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:35.805 04:51:49 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:35.805 04:51:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.805 04:51:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:35.805 04:51:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.805 04:51:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:36.064 04:51:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.064 04:51:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:36.323 04:51:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:36.323 04:51:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:36.323 04:51:50 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:36.323 04:51:50 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:36.323 04:51:50 -- common/autotest_common.sh@640 -- # local es=0 00:14:36.323 04:51:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:36.323 04:51:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.323 04:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.323 04:51:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.323 04:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.323 04:51:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.323 04:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.323 04:51:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.323 04:51:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:36.323 04:51:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:36.582 [2024-05-15 04:51:50.715147] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:36.582 [2024-05-15 04:51:50.716726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:36.582 [2024-05-15 04:51:50.716763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:36.582 [2024-05-15 04:51:50.716806] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:36.582 [2024-05-15 04:51:50.716870] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:36.582 [2024-05-15 04:51:50.716901] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:14:36.582 [2024-05-15 04:51:50.716941] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.582 [2024-05-15 04:51:50.716953] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c780 name raid_bdev1, state configuring 00:14:36.582 request: 00:14:36.582 { 00:14:36.582 "name": "raid_bdev1", 00:14:36.582 "raid_level": "raid0", 00:14:36.582 "base_bdevs": [ 00:14:36.582 "malloc1", 00:14:36.582 "malloc2", 00:14:36.582 "malloc3" 00:14:36.582 ], 00:14:36.582 "superblock": false, 00:14:36.582 "strip_size_kb": 64, 00:14:36.582 "method": "bdev_raid_create", 00:14:36.582 "req_id": 1 00:14:36.582 } 00:14:36.582 Got JSON-RPC error response 00:14:36.582 response: 00:14:36.582 { 00:14:36.582 "code": -17, 00:14:36.582 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:36.582 } 00:14:36.582 04:51:50 -- common/autotest_common.sh@643 -- # es=1 00:14:36.582 04:51:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:36.582 04:51:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:36.582 04:51:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:36.582 04:51:50 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.582 04:51:50 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:36.840 04:51:50 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:36.840 04:51:50 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:36.840 04:51:50 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.840 [2024-05-15 04:51:51.019241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.840 [2024-05-15 04:51:51.019337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.840 [2024-05-15 04:51:51.019407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:14:36.840 [2024-05-15 04:51:51.019443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.840 [2024-05-15 04:51:51.021782] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.840 [2024-05-15 04:51:51.021830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.840 [2024-05-15 04:51:51.021973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:36.840 [2024-05-15 04:51:51.022046] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.840 pt1 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.840 04:51:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.100 04:51:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.100 "name": "raid_bdev1", 00:14:37.100 "uuid": "00429cd8-99da-4388-8596-1e0c6a014888", 00:14:37.100 "strip_size_kb": 64, 00:14:37.100 "state": "configuring", 00:14:37.100 "raid_level": "raid0", 00:14:37.100 "superblock": true, 00:14:37.100 "num_base_bdevs": 3, 00:14:37.100 "num_base_bdevs_discovered": 1, 00:14:37.100 "num_base_bdevs_operational": 3, 00:14:37.100 "base_bdevs_list": [ 00:14:37.100 { 00:14:37.100 "name": "pt1", 00:14:37.100 "uuid": "456c5a4b-5d97-5f9d-8272-12376ffd483b", 00:14:37.100 "is_configured": true, 00:14:37.100 "data_offset": 2048, 00:14:37.100 "data_size": 63488 00:14:37.100 }, 00:14:37.100 { 00:14:37.100 "name": null, 00:14:37.100 "uuid": "363f04a0-a42d-524b-bc00-23afcf75ca37", 00:14:37.100 "is_configured": false, 00:14:37.100 "data_offset": 2048, 00:14:37.100 "data_size": 63488 00:14:37.100 }, 00:14:37.100 { 00:14:37.100 "name": null, 00:14:37.100 "uuid": "06b08305-16f0-5c0f-999d-bd28cae99278", 00:14:37.100 "is_configured": false, 00:14:37.100 "data_offset": 2048, 00:14:37.100 "data_size": 63488 00:14:37.100 } 00:14:37.100 ] 00:14:37.100 }' 00:14:37.100 04:51:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.100 04:51:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.667 04:51:51 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:14:37.667 04:51:51 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.926 [2024-05-15 04:51:51.919314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.926 [2024-05-15 04:51:51.919424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.926 [2024-05-15 04:51:51.919479] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f480 00:14:37.926 [2024-05-15 04:51:51.919503] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.926 [2024-05-15 04:51:51.920115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.926 [2024-05-15 04:51:51.920151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.926 [2024-05-15 04:51:51.920257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:37.926 [2024-05-15 04:51:51.920280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.926 pt2 00:14:37.927 04:51:51 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:37.927 [2024-05-15 04:51:52.147297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:38.185 04:51:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.186 "name": "raid_bdev1", 00:14:38.186 "uuid": "00429cd8-99da-4388-8596-1e0c6a014888", 00:14:38.186 "strip_size_kb": 64, 00:14:38.186 "state": "configuring", 00:14:38.186 "raid_level": "raid0", 00:14:38.186 "superblock": true, 00:14:38.186 "num_base_bdevs": 3, 00:14:38.186 "num_base_bdevs_discovered": 1, 00:14:38.186 "num_base_bdevs_operational": 3, 00:14:38.186 "base_bdevs_list": [ 00:14:38.186 { 00:14:38.186 "name": "pt1", 00:14:38.186 "uuid": "456c5a4b-5d97-5f9d-8272-12376ffd483b", 00:14:38.186 "is_configured": true, 00:14:38.186 "data_offset": 2048, 00:14:38.186 "data_size": 63488 00:14:38.186 }, 00:14:38.186 { 00:14:38.186 "name": null, 00:14:38.186 "uuid": "363f04a0-a42d-524b-bc00-23afcf75ca37", 00:14:38.186 "is_configured": false, 00:14:38.186 "data_offset": 2048, 00:14:38.186 "data_size": 63488 00:14:38.186 }, 00:14:38.186 { 00:14:38.186 "name": null, 00:14:38.186 "uuid": "06b08305-16f0-5c0f-999d-bd28cae99278", 00:14:38.186 "is_configured": false, 00:14:38.186 "data_offset": 2048, 00:14:38.186 "data_size": 63488 00:14:38.186 } 00:14:38.186 ] 00:14:38.186 }' 00:14:38.186 04:51:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.186 04:51:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.754 04:51:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:38.754 04:51:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:38.754 04:51:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.013 [2024-05-15 04:51:53.099459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.013 [2024-05-15 04:51:53.099548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.013 [2024-05-15 04:51:53.099616] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030c80 00:14:39.013 [2024-05-15 04:51:53.099642] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.013 [2024-05-15 04:51:53.100229] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.013 [2024-05-15 04:51:53.100267] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.013 [2024-05-15 04:51:53.100367] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:39.013 [2024-05-15 04:51:53.100389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.013 pt2 00:14:39.013 04:51:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:39.013 04:51:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:39.013 04:51:53 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.272 [2024-05-15 04:51:53.303431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.272 [2024-05-15 04:51:53.303487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.272 [2024-05-15 04:51:53.303522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032180 00:14:39.272 [2024-05-15 04:51:53.303548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.272 [2024-05-15 04:51:53.304011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.272 [2024-05-15 04:51:53.304053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.272 [2024-05-15 04:51:53.304146] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:39.272 [2024-05-15 04:51:53.304166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.272 [2024-05-15 04:51:53.304240] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:14:39.272 [2024-05-15 04:51:53.304249] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:39.272 [2024-05-15 04:51:53.304324] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:39.272 [2024-05-15 04:51:53.304531] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:14:39.272 [2024-05-15 04:51:53.304541] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:14:39.272 [2024-05-15 04:51:53.304636] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.272 pt3 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.272 04:51:53 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.531 04:51:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.531 "name": "raid_bdev1", 00:14:39.531 "uuid": "00429cd8-99da-4388-8596-1e0c6a014888", 00:14:39.531 "strip_size_kb": 64, 00:14:39.531 "state": "online", 00:14:39.531 "raid_level": "raid0", 00:14:39.531 "superblock": true, 00:14:39.531 "num_base_bdevs": 3, 00:14:39.531 "num_base_bdevs_discovered": 3, 00:14:39.531 "num_base_bdevs_operational": 3, 00:14:39.531 "base_bdevs_list": [ 00:14:39.531 { 00:14:39.531 "name": "pt1", 00:14:39.531 "uuid": "456c5a4b-5d97-5f9d-8272-12376ffd483b", 00:14:39.531 "is_configured": true, 00:14:39.531 "data_offset": 2048, 00:14:39.531 "data_size": 63488 00:14:39.531 }, 00:14:39.531 { 00:14:39.531 "name": "pt2", 00:14:39.531 "uuid": "363f04a0-a42d-524b-bc00-23afcf75ca37", 00:14:39.531 "is_configured": true, 00:14:39.531 "data_offset": 2048, 00:14:39.531 "data_size": 63488 00:14:39.531 }, 00:14:39.531 { 00:14:39.531 "name": "pt3", 00:14:39.531 "uuid": "06b08305-16f0-5c0f-999d-bd28cae99278", 00:14:39.531 "is_configured": true, 00:14:39.531 "data_offset": 2048, 00:14:39.531 "data_size": 63488 00:14:39.531 } 00:14:39.531 ] 00:14:39.531 }' 00:14:39.531 04:51:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.531 04:51:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:40.099 [2024-05-15 04:51:54.299639] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@430 -- # '[' 00429cd8-99da-4388-8596-1e0c6a014888 '!=' 00429cd8-99da-4388-8596-1e0c6a014888 ']' 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:40.099 04:51:54 -- bdev/bdev_raid.sh@511 -- # killprocess 50054 00:14:40.099 04:51:54 -- common/autotest_common.sh@926 -- # '[' -z 50054 ']' 00:14:40.099 04:51:54 -- common/autotest_common.sh@930 -- # kill -0 50054 00:14:40.099 04:51:54 -- common/autotest_common.sh@931 -- # uname 00:14:40.099 04:51:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.099 04:51:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50054 00:14:40.358 killing process with pid 50054 00:14:40.358 04:51:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.358 04:51:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.358 04:51:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50054' 00:14:40.358 04:51:54 -- common/autotest_common.sh@945 -- # kill 50054 00:14:40.358 04:51:54 -- common/autotest_common.sh@950 -- # wait 50054 00:14:40.358 [2024-05-15 04:51:54.346394] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.358 [2024-05-15 04:51:54.346458] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.358 [2024-05-15 04:51:54.346497] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.358 [2024-05-15 04:51:54.346506] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:14:40.618 [2024-05-15 04:51:54.641243] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.996 ************************************ 00:14:41.996 END TEST raid_superblock_test 00:14:41.996 ************************************ 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:41.996 00:14:41.996 real 0m10.814s 00:14:41.996 user 0m17.643s 00:14:41.996 sys 0m1.392s 00:14:41.996 04:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.996 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:14:41.996 04:51:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:41.996 04:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.996 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:14:41.996 ************************************ 00:14:41.996 START TEST raid_state_function_test 00:14:41.996 ************************************ 00:14:41.996 04:51:56 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=50376 00:14:41.996 Process raid pid: 50376 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50376' 00:14:41.996 04:51:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50376 
/var/tmp/spdk-raid.sock 00:14:41.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:41.996 04:51:56 -- common/autotest_common.sh@819 -- # '[' -z 50376 ']' 00:14:41.996 04:51:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:41.996 04:51:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:41.996 04:51:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:41.996 04:51:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:41.996 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:14:42.255 [2024-05-15 04:51:56.303673] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:42.255 [2024-05-15 04:51:56.303913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.255 [2024-05-15 04:51:56.481961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.823 [2024-05-15 04:51:56.752678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.823 [2024-05-15 04:51:57.020677] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.760 04:51:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:43.760 04:51:57 -- common/autotest_common.sh@852 -- # return 0 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:43.760 [2024-05-15 04:51:57.977879] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.760 [2024-05-15 04:51:57.977949] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.760 [2024-05-15 04:51:57.977961] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.760 [2024-05-15 04:51:57.977978] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.760 [2024-05-15 04:51:57.977985] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.760 [2024-05-15 04:51:57.978024] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:43.760 04:51:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.019 04:51:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.019 04:51:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.019 04:51:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.019 "name": "Existed_Raid", 00:14:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.019 "strip_size_kb": 64, 00:14:44.019 "state": "configuring", 00:14:44.019 "raid_level": "concat", 00:14:44.019 "superblock": false, 00:14:44.019 "num_base_bdevs": 3, 00:14:44.019 "num_base_bdevs_discovered": 0, 00:14:44.019 "num_base_bdevs_operational": 3, 00:14:44.019 "base_bdevs_list": [ 00:14:44.019 { 00:14:44.019 "name": "BaseBdev1", 00:14:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.019 "is_configured": false, 00:14:44.019 "data_offset": 0, 00:14:44.019 "data_size": 0 00:14:44.019 }, 00:14:44.019 { 00:14:44.019 "name": "BaseBdev2", 00:14:44.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.019 "is_configured": false, 00:14:44.019 "data_offset": 0, 00:14:44.020 "data_size": 0 00:14:44.020 }, 00:14:44.020 { 00:14:44.020 "name": "BaseBdev3", 00:14:44.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.020 "is_configured": false, 00:14:44.020 "data_offset": 0, 00:14:44.020 "data_size": 0 00:14:44.020 } 00:14:44.020 ] 00:14:44.020 }' 00:14:44.020 04:51:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.020 04:51:58 -- common/autotest_common.sh@10 -- # set +x 00:14:44.603 04:51:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.901 [2024-05-15 04:51:58.981897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.901 [2024-05-15 04:51:58.981933] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:44.901 04:51:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:45.160 [2024-05-15 04:51:59.177988] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.160 [2024-05-15 04:51:59.178050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.160 [2024-05-15 04:51:59.178060] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.160 [2024-05-15 04:51:59.178095] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.160 [2024-05-15 04:51:59.178102] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.160 [2024-05-15 04:51:59.178134] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.160 04:51:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.160 [2024-05-15 04:51:59.371415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.160 BaseBdev1 00:14:45.160 04:51:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:45.160 04:51:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:45.160 04:51:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:45.160 04:51:59 -- common/autotest_common.sh@889 -- # local i 00:14:45.160 04:51:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:45.160 04:51:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:45.160 
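[annotation] waitforbdev, traced here for BaseBdev1, defaults its timeout to 2000 ms when the caller passes none, lets bdev examination settle, and then asks for the bdev with a bounded wait. A condensed sketch of what the trace shows (assuming -t makes bdev_get_bdevs itself wait up to that many milliseconds for the bdev to appear):

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # default applied above when no timeout is passed
        local rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }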
04:51:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.420 04:51:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.678 [ 00:14:45.678 { 00:14:45.678 "name": "BaseBdev1", 00:14:45.679 "aliases": [ 00:14:45.679 "190ca11f-cd03-4122-9434-0cee8dc32aa2" 00:14:45.679 ], 00:14:45.679 "product_name": "Malloc disk", 00:14:45.679 "block_size": 512, 00:14:45.679 "num_blocks": 65536, 00:14:45.679 "uuid": "190ca11f-cd03-4122-9434-0cee8dc32aa2", 00:14:45.679 "assigned_rate_limits": { 00:14:45.679 "rw_ios_per_sec": 0, 00:14:45.679 "rw_mbytes_per_sec": 0, 00:14:45.679 "r_mbytes_per_sec": 0, 00:14:45.679 "w_mbytes_per_sec": 0 00:14:45.679 }, 00:14:45.679 "claimed": true, 00:14:45.679 "claim_type": "exclusive_write", 00:14:45.679 "zoned": false, 00:14:45.679 "supported_io_types": { 00:14:45.679 "read": true, 00:14:45.679 "write": true, 00:14:45.679 "unmap": true, 00:14:45.679 "write_zeroes": true, 00:14:45.679 "flush": true, 00:14:45.679 "reset": true, 00:14:45.679 "compare": false, 00:14:45.679 "compare_and_write": false, 00:14:45.679 "abort": true, 00:14:45.679 "nvme_admin": false, 00:14:45.679 "nvme_io": false 00:14:45.679 }, 00:14:45.679 "memory_domains": [ 00:14:45.679 { 00:14:45.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.679 "dma_device_type": 2 00:14:45.679 } 00:14:45.679 ], 00:14:45.679 "driver_specific": {} 00:14:45.679 } 00:14:45.679 ] 00:14:45.679 04:51:59 -- common/autotest_common.sh@895 -- # return 0 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.679 "name": "Existed_Raid", 00:14:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.679 "strip_size_kb": 64, 00:14:45.679 "state": "configuring", 00:14:45.679 "raid_level": "concat", 00:14:45.679 "superblock": false, 00:14:45.679 "num_base_bdevs": 3, 00:14:45.679 "num_base_bdevs_discovered": 1, 00:14:45.679 "num_base_bdevs_operational": 3, 00:14:45.679 "base_bdevs_list": [ 00:14:45.679 { 00:14:45.679 "name": "BaseBdev1", 00:14:45.679 "uuid": "190ca11f-cd03-4122-9434-0cee8dc32aa2", 00:14:45.679 "is_configured": true, 00:14:45.679 "data_offset": 0, 00:14:45.679 "data_size": 65536 00:14:45.679 }, 00:14:45.679 { 00:14:45.679 "name": "BaseBdev2", 00:14:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.679 "is_configured": false, 00:14:45.679 "data_offset": 
0, 00:14:45.679 "data_size": 0 00:14:45.679 }, 00:14:45.679 { 00:14:45.679 "name": "BaseBdev3", 00:14:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.679 "is_configured": false, 00:14:45.679 "data_offset": 0, 00:14:45.679 "data_size": 0 00:14:45.679 } 00:14:45.679 ] 00:14:45.679 }' 00:14:45.679 04:51:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.679 04:51:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.246 04:52:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:46.505 [2024-05-15 04:52:00.683519] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.505 [2024-05-15 04:52:00.683569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:14:46.505 04:52:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:46.505 04:52:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:46.764 [2024-05-15 04:52:00.835624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.764 [2024-05-15 04:52:00.837099] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.764 [2024-05-15 04:52:00.837153] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.764 [2024-05-15 04:52:00.837165] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.764 [2024-05-15 04:52:00.837196] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.764 04:52:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.022 04:52:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.022 "name": "Existed_Raid", 00:14:47.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.022 "strip_size_kb": 64, 00:14:47.022 "state": "configuring", 00:14:47.022 "raid_level": "concat", 00:14:47.022 "superblock": false, 00:14:47.022 "num_base_bdevs": 3, 00:14:47.022 "num_base_bdevs_discovered": 1, 00:14:47.022 "num_base_bdevs_operational": 3, 00:14:47.022 "base_bdevs_list": [ 00:14:47.022 { 00:14:47.022 "name": "BaseBdev1", 00:14:47.022 "uuid": 
"190ca11f-cd03-4122-9434-0cee8dc32aa2", 00:14:47.022 "is_configured": true, 00:14:47.022 "data_offset": 0, 00:14:47.022 "data_size": 65536 00:14:47.022 }, 00:14:47.022 { 00:14:47.022 "name": "BaseBdev2", 00:14:47.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.022 "is_configured": false, 00:14:47.022 "data_offset": 0, 00:14:47.022 "data_size": 0 00:14:47.022 }, 00:14:47.022 { 00:14:47.022 "name": "BaseBdev3", 00:14:47.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.022 "is_configured": false, 00:14:47.022 "data_offset": 0, 00:14:47.022 "data_size": 0 00:14:47.022 } 00:14:47.022 ] 00:14:47.022 }' 00:14:47.022 04:52:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.022 04:52:00 -- common/autotest_common.sh@10 -- # set +x 00:14:47.589 04:52:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:47.847 [2024-05-15 04:52:01.834703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.847 BaseBdev2 00:14:47.847 04:52:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:47.847 04:52:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:47.847 04:52:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:47.847 04:52:01 -- common/autotest_common.sh@889 -- # local i 00:14:47.847 04:52:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:47.847 04:52:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:47.847 04:52:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.847 04:52:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.105 [ 00:14:48.105 { 00:14:48.105 "name": "BaseBdev2", 00:14:48.105 "aliases": [ 00:14:48.105 "f889c898-2abf-47dd-91e8-f67249bb558e" 00:14:48.105 ], 00:14:48.105 "product_name": "Malloc disk", 00:14:48.105 "block_size": 512, 00:14:48.105 "num_blocks": 65536, 00:14:48.105 "uuid": "f889c898-2abf-47dd-91e8-f67249bb558e", 00:14:48.105 "assigned_rate_limits": { 00:14:48.105 "rw_ios_per_sec": 0, 00:14:48.105 "rw_mbytes_per_sec": 0, 00:14:48.105 "r_mbytes_per_sec": 0, 00:14:48.105 "w_mbytes_per_sec": 0 00:14:48.105 }, 00:14:48.105 "claimed": true, 00:14:48.105 "claim_type": "exclusive_write", 00:14:48.105 "zoned": false, 00:14:48.105 "supported_io_types": { 00:14:48.105 "read": true, 00:14:48.105 "write": true, 00:14:48.105 "unmap": true, 00:14:48.105 "write_zeroes": true, 00:14:48.105 "flush": true, 00:14:48.105 "reset": true, 00:14:48.105 "compare": false, 00:14:48.105 "compare_and_write": false, 00:14:48.105 "abort": true, 00:14:48.105 "nvme_admin": false, 00:14:48.105 "nvme_io": false 00:14:48.105 }, 00:14:48.105 "memory_domains": [ 00:14:48.105 { 00:14:48.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.105 "dma_device_type": 2 00:14:48.105 } 00:14:48.105 ], 00:14:48.105 "driver_specific": {} 00:14:48.105 } 00:14:48.105 ] 00:14:48.105 04:52:02 -- common/autotest_common.sh@895 -- # return 0 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.105 04:52:02 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.105 04:52:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.364 04:52:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.364 "name": "Existed_Raid", 00:14:48.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.364 "strip_size_kb": 64, 00:14:48.364 "state": "configuring", 00:14:48.364 "raid_level": "concat", 00:14:48.364 "superblock": false, 00:14:48.364 "num_base_bdevs": 3, 00:14:48.364 "num_base_bdevs_discovered": 2, 00:14:48.364 "num_base_bdevs_operational": 3, 00:14:48.364 "base_bdevs_list": [ 00:14:48.364 { 00:14:48.364 "name": "BaseBdev1", 00:14:48.364 "uuid": "190ca11f-cd03-4122-9434-0cee8dc32aa2", 00:14:48.364 "is_configured": true, 00:14:48.364 "data_offset": 0, 00:14:48.364 "data_size": 65536 00:14:48.364 }, 00:14:48.364 { 00:14:48.364 "name": "BaseBdev2", 00:14:48.364 "uuid": "f889c898-2abf-47dd-91e8-f67249bb558e", 00:14:48.364 "is_configured": true, 00:14:48.364 "data_offset": 0, 00:14:48.364 "data_size": 65536 00:14:48.364 }, 00:14:48.364 { 00:14:48.364 "name": "BaseBdev3", 00:14:48.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.364 "is_configured": false, 00:14:48.364 "data_offset": 0, 00:14:48.364 "data_size": 0 00:14:48.364 } 00:14:48.364 ] 00:14:48.364 }' 00:14:48.364 04:52:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.364 04:52:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.930 04:52:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.186 [2024-05-15 04:52:03.211975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.186 [2024-05-15 04:52:03.212025] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:14:49.186 [2024-05-15 04:52:03.212033] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:49.186 [2024-05-15 04:52:03.212128] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:49.186 [2024-05-15 04:52:03.212348] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:14:49.186 [2024-05-15 04:52:03.212359] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:14:49.186 [2024-05-15 04:52:03.212558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.186 BaseBdev3 00:14:49.186 04:52:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:49.186 04:52:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:49.186 04:52:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:49.186 04:52:03 -- common/autotest_common.sh@889 -- # local i 00:14:49.186 04:52:03 
-- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:49.186 04:52:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:49.186 04:52:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.443 04:52:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.443 [ 00:14:49.443 { 00:14:49.443 "name": "BaseBdev3", 00:14:49.443 "aliases": [ 00:14:49.443 "ff87c479-a841-44f3-894c-adf0d9cb93a6" 00:14:49.443 ], 00:14:49.443 "product_name": "Malloc disk", 00:14:49.443 "block_size": 512, 00:14:49.443 "num_blocks": 65536, 00:14:49.443 "uuid": "ff87c479-a841-44f3-894c-adf0d9cb93a6", 00:14:49.443 "assigned_rate_limits": { 00:14:49.443 "rw_ios_per_sec": 0, 00:14:49.443 "rw_mbytes_per_sec": 0, 00:14:49.443 "r_mbytes_per_sec": 0, 00:14:49.443 "w_mbytes_per_sec": 0 00:14:49.443 }, 00:14:49.443 "claimed": true, 00:14:49.443 "claim_type": "exclusive_write", 00:14:49.443 "zoned": false, 00:14:49.443 "supported_io_types": { 00:14:49.443 "read": true, 00:14:49.443 "write": true, 00:14:49.443 "unmap": true, 00:14:49.443 "write_zeroes": true, 00:14:49.443 "flush": true, 00:14:49.443 "reset": true, 00:14:49.443 "compare": false, 00:14:49.443 "compare_and_write": false, 00:14:49.443 "abort": true, 00:14:49.443 "nvme_admin": false, 00:14:49.443 "nvme_io": false 00:14:49.443 }, 00:14:49.443 "memory_domains": [ 00:14:49.443 { 00:14:49.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.443 "dma_device_type": 2 00:14:49.443 } 00:14:49.443 ], 00:14:49.443 "driver_specific": {} 00:14:49.443 } 00:14:49.443 ] 00:14:49.443 04:52:03 -- common/autotest_common.sh@895 -- # return 0 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.443 04:52:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.701 04:52:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.701 "name": "Existed_Raid", 00:14:49.701 "uuid": "d1e3d259-9d7c-43f6-9387-649f93df053b", 00:14:49.701 "strip_size_kb": 64, 00:14:49.701 "state": "online", 00:14:49.701 "raid_level": "concat", 00:14:49.701 "superblock": false, 00:14:49.701 "num_base_bdevs": 3, 00:14:49.701 "num_base_bdevs_discovered": 3, 00:14:49.701 "num_base_bdevs_operational": 3, 00:14:49.701 "base_bdevs_list": [ 00:14:49.701 { 00:14:49.701 "name": "BaseBdev1", 00:14:49.701 "uuid": "190ca11f-cd03-4122-9434-0cee8dc32aa2", 00:14:49.701 
"is_configured": true, 00:14:49.701 "data_offset": 0, 00:14:49.701 "data_size": 65536 00:14:49.701 }, 00:14:49.701 { 00:14:49.701 "name": "BaseBdev2", 00:14:49.701 "uuid": "f889c898-2abf-47dd-91e8-f67249bb558e", 00:14:49.701 "is_configured": true, 00:14:49.701 "data_offset": 0, 00:14:49.701 "data_size": 65536 00:14:49.701 }, 00:14:49.701 { 00:14:49.701 "name": "BaseBdev3", 00:14:49.701 "uuid": "ff87c479-a841-44f3-894c-adf0d9cb93a6", 00:14:49.701 "is_configured": true, 00:14:49.701 "data_offset": 0, 00:14:49.701 "data_size": 65536 00:14:49.701 } 00:14:49.701 ] 00:14:49.701 }' 00:14:49.701 04:52:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.701 04:52:03 -- common/autotest_common.sh@10 -- # set +x 00:14:50.267 04:52:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.524 [2024-05-15 04:52:04.544180] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.524 [2024-05-15 04:52:04.544212] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.524 [2024-05-15 04:52:04.544268] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.524 04:52:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.781 04:52:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.781 "name": "Existed_Raid", 00:14:50.781 "uuid": "d1e3d259-9d7c-43f6-9387-649f93df053b", 00:14:50.781 "strip_size_kb": 64, 00:14:50.781 "state": "offline", 00:14:50.781 "raid_level": "concat", 00:14:50.781 "superblock": false, 00:14:50.781 "num_base_bdevs": 3, 00:14:50.781 "num_base_bdevs_discovered": 2, 00:14:50.781 "num_base_bdevs_operational": 2, 00:14:50.781 "base_bdevs_list": [ 00:14:50.781 { 00:14:50.781 "name": null, 00:14:50.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.781 "is_configured": false, 00:14:50.781 "data_offset": 0, 00:14:50.781 "data_size": 65536 00:14:50.781 }, 00:14:50.781 { 00:14:50.781 "name": "BaseBdev2", 00:14:50.781 "uuid": "f889c898-2abf-47dd-91e8-f67249bb558e", 00:14:50.781 "is_configured": true, 00:14:50.781 "data_offset": 0, 00:14:50.781 "data_size": 65536 00:14:50.781 }, 00:14:50.781 { 00:14:50.781 
"name": "BaseBdev3", 00:14:50.781 "uuid": "ff87c479-a841-44f3-894c-adf0d9cb93a6", 00:14:50.781 "is_configured": true, 00:14:50.781 "data_offset": 0, 00:14:50.781 "data_size": 65536 00:14:50.781 } 00:14:50.781 ] 00:14:50.781 }' 00:14:50.781 04:52:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.781 04:52:04 -- common/autotest_common.sh@10 -- # set +x 00:14:51.347 04:52:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:51.347 04:52:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:51.347 04:52:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.347 04:52:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:51.605 04:52:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:51.605 04:52:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.605 04:52:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:51.605 [2024-05-15 04:52:05.760537] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.863 04:52:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:51.863 04:52:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:51.863 04:52:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.864 04:52:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:51.864 04:52:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:51.864 04:52:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.864 04:52:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:52.122 [2024-05-15 04:52:06.222356] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.122 [2024-05-15 04:52:06.222426] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:14:52.380 04:52:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:52.380 04:52:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:52.380 04:52:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.380 04:52:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:52.380 04:52:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:52.380 04:52:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:52.381 04:52:06 -- bdev/bdev_raid.sh@287 -- # killprocess 50376 00:14:52.381 04:52:06 -- common/autotest_common.sh@926 -- # '[' -z 50376 ']' 00:14:52.381 04:52:06 -- common/autotest_common.sh@930 -- # kill -0 50376 00:14:52.381 04:52:06 -- common/autotest_common.sh@931 -- # uname 00:14:52.381 04:52:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.381 04:52:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50376 00:14:52.381 killing process with pid 50376 00:14:52.381 04:52:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:52.381 04:52:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:52.381 04:52:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50376' 00:14:52.381 04:52:06 -- common/autotest_common.sh@945 -- # kill 50376 00:14:52.381 04:52:06 -- common/autotest_common.sh@950 -- # wait 50376 00:14:52.381 [2024-05-15 04:52:06.594361] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.381 [2024-05-15 04:52:06.594521] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.286 ************************************ 00:14:54.286 END TEST raid_state_function_test 00:14:54.286 ************************************ 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:54.286 00:14:54.286 real 0m11.884s 00:14:54.286 user 0m19.639s 00:14:54.286 sys 0m1.597s 00:14:54.286 04:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.286 04:52:08 -- common/autotest_common.sh@10 -- # set +x 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:14:54.286 04:52:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:54.286 04:52:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.286 04:52:08 -- common/autotest_common.sh@10 -- # set +x 00:14:54.286 ************************************ 00:14:54.286 START TEST raid_state_function_test_sb 00:14:54.286 ************************************ 00:14:54.286 04:52:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:54.286 04:52:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=50756 00:14:54.287 Process raid pid: 50756 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50756' 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50756 /var/tmp/spdk-raid.sock 00:14:54.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
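The first suite above ends by tearing the array down the hard way: with all three malloc base bdevs claimed the concat array reports "online"; deleting a base bdev flips it to "offline" (concat carries no redundancy, so has_redundancy returns 1 and the expected state becomes offline); killprocess 50376 then shuts the app down. A condensed sketch of that teardown, using only RPCs that appear in this log:

    # Assumes the array is online on BaseBdev1..3, as above.
    $RPC bdev_malloc_delete BaseBdev1    # no redundancy: the array goes offline
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect "offline"
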
00:14:54.287 04:52:08 -- common/autotest_common.sh@819 -- # '[' -z 50756 ']' 00:14:54.287 04:52:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:54.287 04:52:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.287 04:52:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:54.287 04:52:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.287 04:52:08 -- common/autotest_common.sh@10 -- # set +x 00:14:54.287 04:52:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:54.287 [2024-05-15 04:52:08.250882] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:54.287 [2024-05-15 04:52:08.251091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.287 [2024-05-15 04:52:08.425930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.545 [2024-05-15 04:52:08.706100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.805 [2024-05-15 04:52:08.972252] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.742 04:52:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.742 04:52:09 -- common/autotest_common.sh@852 -- # return 0 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:55.742 [2024-05-15 04:52:09.843458] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.742 [2024-05-15 04:52:09.843528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.742 [2024-05-15 04:52:09.843538] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.742 [2024-05-15 04:52:09.843556] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.742 [2024-05-15 04:52:09.843563] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.742 [2024-05-15 04:52:09.843607] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.742 04:52:09 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.002 04:52:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.002 "name": "Existed_Raid", 00:14:56.002 "uuid": "d6d33cda-9213-4b9d-9f4c-e6279bf9b1b6", 00:14:56.002 "strip_size_kb": 64, 00:14:56.002 "state": "configuring", 00:14:56.002 "raid_level": "concat", 00:14:56.002 "superblock": true, 00:14:56.002 "num_base_bdevs": 3, 00:14:56.002 "num_base_bdevs_discovered": 0, 00:14:56.002 "num_base_bdevs_operational": 3, 00:14:56.002 "base_bdevs_list": [ 00:14:56.002 { 00:14:56.002 "name": "BaseBdev1", 00:14:56.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.002 "is_configured": false, 00:14:56.002 "data_offset": 0, 00:14:56.002 "data_size": 0 00:14:56.002 }, 00:14:56.002 { 00:14:56.002 "name": "BaseBdev2", 00:14:56.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.002 "is_configured": false, 00:14:56.002 "data_offset": 0, 00:14:56.002 "data_size": 0 00:14:56.002 }, 00:14:56.002 { 00:14:56.002 "name": "BaseBdev3", 00:14:56.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.002 "is_configured": false, 00:14:56.002 "data_offset": 0, 00:14:56.002 "data_size": 0 00:14:56.002 } 00:14:56.002 ] 00:14:56.002 }' 00:14:56.002 04:52:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.002 04:52:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.570 04:52:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:56.830 [2024-05-15 04:52:10.815407] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.830 [2024-05-15 04:52:10.815446] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:56.830 04:52:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:56.830 [2024-05-15 04:52:10.959501] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.830 [2024-05-15 04:52:10.959562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.830 [2024-05-15 04:52:10.959572] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.830 [2024-05-15 04:52:10.959606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.830 [2024-05-15 04:52:10.959613] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.830 [2024-05-15 04:52:10.959644] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.830 04:52:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.089 [2024-05-15 04:52:11.159356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.089 BaseBdev1 00:14:57.089 04:52:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:57.089 04:52:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:57.089 04:52:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:57.089 04:52:11 -- common/autotest_common.sh@889 -- # local i 00:14:57.089 04:52:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:57.089 04:52:11 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:57.089 04:52:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.089 04:52:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.349 [ 00:14:57.349 { 00:14:57.349 "name": "BaseBdev1", 00:14:57.349 "aliases": [ 00:14:57.349 "83aa617f-47e8-4e73-935e-c520628a6901" 00:14:57.349 ], 00:14:57.349 "product_name": "Malloc disk", 00:14:57.349 "block_size": 512, 00:14:57.349 "num_blocks": 65536, 00:14:57.349 "uuid": "83aa617f-47e8-4e73-935e-c520628a6901", 00:14:57.349 "assigned_rate_limits": { 00:14:57.349 "rw_ios_per_sec": 0, 00:14:57.349 "rw_mbytes_per_sec": 0, 00:14:57.349 "r_mbytes_per_sec": 0, 00:14:57.349 "w_mbytes_per_sec": 0 00:14:57.349 }, 00:14:57.349 "claimed": true, 00:14:57.349 "claim_type": "exclusive_write", 00:14:57.349 "zoned": false, 00:14:57.349 "supported_io_types": { 00:14:57.349 "read": true, 00:14:57.349 "write": true, 00:14:57.349 "unmap": true, 00:14:57.349 "write_zeroes": true, 00:14:57.349 "flush": true, 00:14:57.349 "reset": true, 00:14:57.349 "compare": false, 00:14:57.349 "compare_and_write": false, 00:14:57.349 "abort": true, 00:14:57.349 "nvme_admin": false, 00:14:57.349 "nvme_io": false 00:14:57.349 }, 00:14:57.349 "memory_domains": [ 00:14:57.349 { 00:14:57.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.349 "dma_device_type": 2 00:14:57.349 } 00:14:57.349 ], 00:14:57.349 "driver_specific": {} 00:14:57.349 } 00:14:57.349 ] 00:14:57.349 04:52:11 -- common/autotest_common.sh@895 -- # return 0 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.349 04:52:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.609 04:52:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.609 "name": "Existed_Raid", 00:14:57.609 "uuid": "5a347777-9423-4873-9834-49845094cdd9", 00:14:57.609 "strip_size_kb": 64, 00:14:57.609 "state": "configuring", 00:14:57.609 "raid_level": "concat", 00:14:57.609 "superblock": true, 00:14:57.609 "num_base_bdevs": 3, 00:14:57.609 "num_base_bdevs_discovered": 1, 00:14:57.609 "num_base_bdevs_operational": 3, 00:14:57.609 "base_bdevs_list": [ 00:14:57.609 { 00:14:57.609 "name": "BaseBdev1", 00:14:57.609 "uuid": "83aa617f-47e8-4e73-935e-c520628a6901", 00:14:57.609 "is_configured": true, 00:14:57.609 "data_offset": 2048, 00:14:57.609 "data_size": 63488 00:14:57.609 }, 00:14:57.609 { 00:14:57.609 "name": "BaseBdev2", 00:14:57.609 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:57.609 "is_configured": false, 00:14:57.609 "data_offset": 0, 00:14:57.609 "data_size": 0 00:14:57.609 }, 00:14:57.609 { 00:14:57.609 "name": "BaseBdev3", 00:14:57.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.609 "is_configured": false, 00:14:57.609 "data_offset": 0, 00:14:57.609 "data_size": 0 00:14:57.609 } 00:14:57.609 ] 00:14:57.609 }' 00:14:57.609 04:52:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.609 04:52:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.193 04:52:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.193 [2024-05-15 04:52:12.407469] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.193 [2024-05-15 04:52:12.407519] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:14:58.501 04:52:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:58.501 04:52:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:58.501 04:52:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.772 BaseBdev1 00:14:58.772 04:52:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:58.772 04:52:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:58.772 04:52:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:58.772 04:52:12 -- common/autotest_common.sh@889 -- # local i 00:14:58.772 04:52:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:58.772 04:52:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:58.772 04:52:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.772 04:52:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.031 [ 00:14:59.031 { 00:14:59.031 "name": "BaseBdev1", 00:14:59.032 "aliases": [ 00:14:59.032 "82e1eb92-15d4-4f2d-a82e-0c4f2a842247" 00:14:59.032 ], 00:14:59.032 "product_name": "Malloc disk", 00:14:59.032 "block_size": 512, 00:14:59.032 "num_blocks": 65536, 00:14:59.032 "uuid": "82e1eb92-15d4-4f2d-a82e-0c4f2a842247", 00:14:59.032 "assigned_rate_limits": { 00:14:59.032 "rw_ios_per_sec": 0, 00:14:59.032 "rw_mbytes_per_sec": 0, 00:14:59.032 "r_mbytes_per_sec": 0, 00:14:59.032 "w_mbytes_per_sec": 0 00:14:59.032 }, 00:14:59.032 "claimed": false, 00:14:59.032 "zoned": false, 00:14:59.032 "supported_io_types": { 00:14:59.032 "read": true, 00:14:59.032 "write": true, 00:14:59.032 "unmap": true, 00:14:59.032 "write_zeroes": true, 00:14:59.032 "flush": true, 00:14:59.032 "reset": true, 00:14:59.032 "compare": false, 00:14:59.032 "compare_and_write": false, 00:14:59.032 "abort": true, 00:14:59.032 "nvme_admin": false, 00:14:59.032 "nvme_io": false 00:14:59.032 }, 00:14:59.032 "memory_domains": [ 00:14:59.032 { 00:14:59.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.032 "dma_device_type": 2 00:14:59.032 } 00:14:59.032 ], 00:14:59.032 "driver_specific": {} 00:14:59.032 } 00:14:59.032 ] 00:14:59.032 04:52:13 -- common/autotest_common.sh@895 -- # return 0 00:14:59.032 04:52:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:59.290 [2024-05-15 04:52:13.310961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.290 [2024-05-15 04:52:13.312190] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.290 [2024-05-15 04:52:13.312240] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.290 [2024-05-15 04:52:13.312250] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:59.290 [2024-05-15 04:52:13.312271] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.290 "name": "Existed_Raid", 00:14:59.290 "uuid": "51a188fd-c989-43a6-8d41-61a7426641a9", 00:14:59.290 "strip_size_kb": 64, 00:14:59.290 "state": "configuring", 00:14:59.290 "raid_level": "concat", 00:14:59.290 "superblock": true, 00:14:59.290 "num_base_bdevs": 3, 00:14:59.290 "num_base_bdevs_discovered": 1, 00:14:59.290 "num_base_bdevs_operational": 3, 00:14:59.290 "base_bdevs_list": [ 00:14:59.290 { 00:14:59.290 "name": "BaseBdev1", 00:14:59.290 "uuid": "82e1eb92-15d4-4f2d-a82e-0c4f2a842247", 00:14:59.290 "is_configured": true, 00:14:59.290 "data_offset": 2048, 00:14:59.290 "data_size": 63488 00:14:59.290 }, 00:14:59.290 { 00:14:59.290 "name": "BaseBdev2", 00:14:59.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.290 "is_configured": false, 00:14:59.290 "data_offset": 0, 00:14:59.290 "data_size": 0 00:14:59.290 }, 00:14:59.290 { 00:14:59.290 "name": "BaseBdev3", 00:14:59.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.290 "is_configured": false, 00:14:59.290 "data_offset": 0, 00:14:59.290 "data_size": 0 00:14:59.290 } 00:14:59.290 ] 00:14:59.290 }' 00:14:59.290 04:52:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.290 04:52:13 -- common/autotest_common.sh@10 -- # set +x 00:14:59.856 04:52:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.115 [2024-05-15 04:52:14.229012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.115 BaseBdev2 
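Worth noting in this superblock variant: the -s flag makes the create call reserve room for an on-disk superblock, which is why each 65536-block malloc base bdev now reports data_offset 2048 and data_size 63488 where the non-superblock run showed 0 and 65536 (and why the assembled blockcnt drops from 196608 to 190464). A sketch of the only difference in the create call, per the command visible above:

    # Same create as before, plus -s to request the on-disk superblock;
    # 2048 blocks per base bdev are set aside for raid metadata.
    $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
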
00:15:00.115 04:52:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:00.115 04:52:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:00.115 04:52:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:00.115 04:52:14 -- common/autotest_common.sh@889 -- # local i 00:15:00.115 04:52:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:00.115 04:52:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:00.116 04:52:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.375 04:52:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.375 [ 00:15:00.375 { 00:15:00.375 "name": "BaseBdev2", 00:15:00.376 "aliases": [ 00:15:00.376 "2e7c1b15-048f-479f-a8e4-88d75fb2a3d6" 00:15:00.376 ], 00:15:00.376 "product_name": "Malloc disk", 00:15:00.376 "block_size": 512, 00:15:00.376 "num_blocks": 65536, 00:15:00.376 "uuid": "2e7c1b15-048f-479f-a8e4-88d75fb2a3d6", 00:15:00.376 "assigned_rate_limits": { 00:15:00.376 "rw_ios_per_sec": 0, 00:15:00.376 "rw_mbytes_per_sec": 0, 00:15:00.376 "r_mbytes_per_sec": 0, 00:15:00.376 "w_mbytes_per_sec": 0 00:15:00.376 }, 00:15:00.376 "claimed": true, 00:15:00.376 "claim_type": "exclusive_write", 00:15:00.376 "zoned": false, 00:15:00.376 "supported_io_types": { 00:15:00.376 "read": true, 00:15:00.376 "write": true, 00:15:00.376 "unmap": true, 00:15:00.376 "write_zeroes": true, 00:15:00.376 "flush": true, 00:15:00.376 "reset": true, 00:15:00.376 "compare": false, 00:15:00.376 "compare_and_write": false, 00:15:00.376 "abort": true, 00:15:00.376 "nvme_admin": false, 00:15:00.376 "nvme_io": false 00:15:00.376 }, 00:15:00.376 "memory_domains": [ 00:15:00.376 { 00:15:00.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.376 "dma_device_type": 2 00:15:00.376 } 00:15:00.376 ], 00:15:00.376 "driver_specific": {} 00:15:00.376 } 00:15:00.376 ] 00:15:00.376 04:52:14 -- common/autotest_common.sh@895 -- # return 0 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.376 04:52:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.637 04:52:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.637 04:52:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.637 04:52:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.637 "name": "Existed_Raid", 00:15:00.637 "uuid": "51a188fd-c989-43a6-8d41-61a7426641a9", 00:15:00.637 "strip_size_kb": 64, 00:15:00.637 "state": "configuring", 00:15:00.637 
"raid_level": "concat", 00:15:00.637 "superblock": true, 00:15:00.637 "num_base_bdevs": 3, 00:15:00.637 "num_base_bdevs_discovered": 2, 00:15:00.637 "num_base_bdevs_operational": 3, 00:15:00.637 "base_bdevs_list": [ 00:15:00.637 { 00:15:00.637 "name": "BaseBdev1", 00:15:00.637 "uuid": "82e1eb92-15d4-4f2d-a82e-0c4f2a842247", 00:15:00.637 "is_configured": true, 00:15:00.637 "data_offset": 2048, 00:15:00.637 "data_size": 63488 00:15:00.637 }, 00:15:00.637 { 00:15:00.637 "name": "BaseBdev2", 00:15:00.637 "uuid": "2e7c1b15-048f-479f-a8e4-88d75fb2a3d6", 00:15:00.637 "is_configured": true, 00:15:00.637 "data_offset": 2048, 00:15:00.637 "data_size": 63488 00:15:00.637 }, 00:15:00.637 { 00:15:00.637 "name": "BaseBdev3", 00:15:00.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.637 "is_configured": false, 00:15:00.637 "data_offset": 0, 00:15:00.637 "data_size": 0 00:15:00.637 } 00:15:00.637 ] 00:15:00.637 }' 00:15:00.637 04:52:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.637 04:52:14 -- common/autotest_common.sh@10 -- # set +x 00:15:01.206 04:52:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:01.466 [2024-05-15 04:52:15.490554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.466 BaseBdev3 00:15:01.466 [2024-05-15 04:52:15.490963] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:15:01.466 [2024-05-15 04:52:15.490985] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:01.466 [2024-05-15 04:52:15.491098] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:01.466 [2024-05-15 04:52:15.491309] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:15:01.466 [2024-05-15 04:52:15.491319] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:15:01.466 [2024-05-15 04:52:15.491421] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.466 04:52:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:01.466 04:52:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:01.466 04:52:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:01.466 04:52:15 -- common/autotest_common.sh@889 -- # local i 00:15:01.466 04:52:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:01.466 04:52:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:01.466 04:52:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.466 04:52:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:01.726 [ 00:15:01.726 { 00:15:01.726 "name": "BaseBdev3", 00:15:01.726 "aliases": [ 00:15:01.726 "df3a11bf-baf9-46a8-a12d-4943a8759822" 00:15:01.726 ], 00:15:01.726 "product_name": "Malloc disk", 00:15:01.726 "block_size": 512, 00:15:01.726 "num_blocks": 65536, 00:15:01.726 "uuid": "df3a11bf-baf9-46a8-a12d-4943a8759822", 00:15:01.726 "assigned_rate_limits": { 00:15:01.726 "rw_ios_per_sec": 0, 00:15:01.726 "rw_mbytes_per_sec": 0, 00:15:01.726 "r_mbytes_per_sec": 0, 00:15:01.726 "w_mbytes_per_sec": 0 00:15:01.726 }, 00:15:01.726 "claimed": true, 00:15:01.726 "claim_type": "exclusive_write", 00:15:01.726 "zoned": false, 
00:15:01.726 "supported_io_types": { 00:15:01.726 "read": true, 00:15:01.726 "write": true, 00:15:01.726 "unmap": true, 00:15:01.726 "write_zeroes": true, 00:15:01.726 "flush": true, 00:15:01.726 "reset": true, 00:15:01.726 "compare": false, 00:15:01.726 "compare_and_write": false, 00:15:01.726 "abort": true, 00:15:01.726 "nvme_admin": false, 00:15:01.726 "nvme_io": false 00:15:01.726 }, 00:15:01.726 "memory_domains": [ 00:15:01.726 { 00:15:01.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.726 "dma_device_type": 2 00:15:01.726 } 00:15:01.726 ], 00:15:01.726 "driver_specific": {} 00:15:01.726 } 00:15:01.726 ] 00:15:01.726 04:52:15 -- common/autotest_common.sh@895 -- # return 0 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.726 04:52:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.986 04:52:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.986 "name": "Existed_Raid", 00:15:01.986 "uuid": "51a188fd-c989-43a6-8d41-61a7426641a9", 00:15:01.986 "strip_size_kb": 64, 00:15:01.986 "state": "online", 00:15:01.986 "raid_level": "concat", 00:15:01.986 "superblock": true, 00:15:01.986 "num_base_bdevs": 3, 00:15:01.986 "num_base_bdevs_discovered": 3, 00:15:01.986 "num_base_bdevs_operational": 3, 00:15:01.986 "base_bdevs_list": [ 00:15:01.986 { 00:15:01.986 "name": "BaseBdev1", 00:15:01.986 "uuid": "82e1eb92-15d4-4f2d-a82e-0c4f2a842247", 00:15:01.986 "is_configured": true, 00:15:01.986 "data_offset": 2048, 00:15:01.986 "data_size": 63488 00:15:01.986 }, 00:15:01.986 { 00:15:01.986 "name": "BaseBdev2", 00:15:01.986 "uuid": "2e7c1b15-048f-479f-a8e4-88d75fb2a3d6", 00:15:01.986 "is_configured": true, 00:15:01.986 "data_offset": 2048, 00:15:01.986 "data_size": 63488 00:15:01.986 }, 00:15:01.986 { 00:15:01.986 "name": "BaseBdev3", 00:15:01.986 "uuid": "df3a11bf-baf9-46a8-a12d-4943a8759822", 00:15:01.986 "is_configured": true, 00:15:01.986 "data_offset": 2048, 00:15:01.986 "data_size": 63488 00:15:01.986 } 00:15:01.986 ] 00:15:01.986 }' 00:15:01.986 04:52:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.986 04:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:02.555 04:52:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:02.555 [2024-05-15 04:52:16.778858] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.555 [2024-05-15 04:52:16.778892] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:15:02.555 [2024-05-15 04:52:16.778937] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.814 04:52:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.073 04:52:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.073 "name": "Existed_Raid", 00:15:03.073 "uuid": "51a188fd-c989-43a6-8d41-61a7426641a9", 00:15:03.073 "strip_size_kb": 64, 00:15:03.073 "state": "offline", 00:15:03.073 "raid_level": "concat", 00:15:03.073 "superblock": true, 00:15:03.073 "num_base_bdevs": 3, 00:15:03.073 "num_base_bdevs_discovered": 2, 00:15:03.073 "num_base_bdevs_operational": 2, 00:15:03.073 "base_bdevs_list": [ 00:15:03.073 { 00:15:03.073 "name": null, 00:15:03.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.073 "is_configured": false, 00:15:03.073 "data_offset": 2048, 00:15:03.073 "data_size": 63488 00:15:03.073 }, 00:15:03.073 { 00:15:03.073 "name": "BaseBdev2", 00:15:03.073 "uuid": "2e7c1b15-048f-479f-a8e4-88d75fb2a3d6", 00:15:03.073 "is_configured": true, 00:15:03.073 "data_offset": 2048, 00:15:03.073 "data_size": 63488 00:15:03.073 }, 00:15:03.073 { 00:15:03.073 "name": "BaseBdev3", 00:15:03.073 "uuid": "df3a11bf-baf9-46a8-a12d-4943a8759822", 00:15:03.073 "is_configured": true, 00:15:03.073 "data_offset": 2048, 00:15:03.073 "data_size": 63488 00:15:03.073 } 00:15:03.073 ] 00:15:03.073 }' 00:15:03.073 04:52:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.073 04:52:17 -- common/autotest_common.sh@10 -- # set +x 00:15:03.641 04:52:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:03.641 04:52:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:03.641 04:52:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.641 04:52:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:03.901 04:52:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:03.901 04:52:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.901 04:52:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:03.901 [2024-05-15 
04:52:18.019039] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.901 04:52:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:03.901 04:52:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:04.160 04:52:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.160 04:52:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:04.160 04:52:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:04.160 04:52:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.160 04:52:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:04.419 [2024-05-15 04:52:18.473639] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.419 [2024-05-15 04:52:18.473686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:15:04.419 04:52:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:04.419 04:52:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:04.420 04:52:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.420 04:52:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:04.679 04:52:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:04.679 04:52:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:04.679 04:52:18 -- bdev/bdev_raid.sh@287 -- # killprocess 50756 00:15:04.679 04:52:18 -- common/autotest_common.sh@926 -- # '[' -z 50756 ']' 00:15:04.679 04:52:18 -- common/autotest_common.sh@930 -- # kill -0 50756 00:15:04.679 04:52:18 -- common/autotest_common.sh@931 -- # uname 00:15:04.679 04:52:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:04.679 04:52:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50756 00:15:04.679 killing process with pid 50756 00:15:04.679 04:52:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:04.679 04:52:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:04.679 04:52:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50756' 00:15:04.679 04:52:18 -- common/autotest_common.sh@945 -- # kill 50756 00:15:04.679 04:52:18 -- common/autotest_common.sh@950 -- # wait 50756 00:15:04.679 [2024-05-15 04:52:18.768620] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.679 [2024-05-15 04:52:18.768762] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:06.057 ************************************ 00:15:06.057 END TEST raid_state_function_test_sb 00:15:06.057 ************************************ 00:15:06.057 00:15:06.057 real 0m12.115s 00:15:06.057 user 0m20.079s 00:15:06.057 sys 0m1.577s 00:15:06.057 04:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.057 04:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:06.057 04:52:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:06.057 04:52:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:06.057 04:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:06.057 ************************************ 00:15:06.057 START TEST raid_superblock_test 00:15:06.057 
************************************ 00:15:06.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.057 04:52:20 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@357 -- # raid_pid=51146 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51146 /var/tmp/spdk-raid.sock 00:15:06.057 04:52:20 -- common/autotest_common.sh@819 -- # '[' -z 51146 ']' 00:15:06.057 04:52:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:06.057 04:52:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.057 04:52:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:06.057 04:52:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.057 04:52:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:06.057 04:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:06.317 [2024-05-15 04:52:20.424552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
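
For context, the harness above launches a bare bdev_svc application and drives it over a dedicated JSON-RPC socket. A minimal sketch of that startup, using the same binary path and socket as in this log (the readiness loop below is illustrative only and stands in for the real waitforlisten helper from autotest_common.sh):

  # start the service that will host the raid bdevs under test
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # block until the UNIX domain socket appears before issuing any rpc.py call
  until [ -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done
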
00:15:06.317 [2024-05-15 04:52:20.425090] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51146 ] 00:15:06.577 [2024-05-15 04:52:20.615183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.835 [2024-05-15 04:52:20.893414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.093 [2024-05-15 04:52:21.161546] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.028 04:52:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.028 04:52:21 -- common/autotest_common.sh@852 -- # return 0 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.028 04:52:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:08.028 malloc1 00:15:08.028 04:52:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.286 [2024-05-15 04:52:22.329593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.286 [2024-05-15 04:52:22.329674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.286 [2024-05-15 04:52:22.329900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:15:08.286 [2024-05-15 04:52:22.329960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.286 [2024-05-15 04:52:22.331704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.286 [2024-05-15 04:52:22.331761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.286 pt1 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.286 04:52:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:08.544 malloc2 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:15:08.544 [2024-05-15 04:52:22.661412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.544 [2024-05-15 04:52:22.661483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.544 [2024-05-15 04:52:22.661525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:15:08.544 [2024-05-15 04:52:22.661563] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.544 [2024-05-15 04:52:22.663093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.544 [2024-05-15 04:52:22.663135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.544 pt2 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:08.544 04:52:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.545 04:52:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.545 04:52:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.545 04:52:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:08.803 malloc3 00:15:08.803 04:52:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:08.803 [2024-05-15 04:52:22.984850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:08.803 [2024-05-15 04:52:22.984923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.803 [2024-05-15 04:52:22.984986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:15:08.803 [2024-05-15 04:52:22.985026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.803 [2024-05-15 04:52:22.986904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.803 [2024-05-15 04:52:22.986953] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:08.803 pt3 00:15:08.803 04:52:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:08.803 04:52:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:08.803 04:52:22 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:09.060 [2024-05-15 04:52:23.180937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.060 [2024-05-15 04:52:23.182175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.060 [2024-05-15 04:52:23.182215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.060 [2024-05-15 04:52:23.182309] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002c180 00:15:09.060 [2024-05-15 04:52:23.182319] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.060 [2024-05-15 04:52:23.182420] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:09.060 [2024-05-15 04:52:23.182625] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002c180 00:15:09.060 [2024-05-15 04:52:23.182635] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002c180 00:15:09.060 [2024-05-15 04:52:23.182748] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.060 04:52:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.317 04:52:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.317 "name": "raid_bdev1", 00:15:09.317 "uuid": "1d2aa8d5-3acc-4e1f-a849-f6220e206956", 00:15:09.317 "strip_size_kb": 64, 00:15:09.317 "state": "online", 00:15:09.317 "raid_level": "concat", 00:15:09.317 "superblock": true, 00:15:09.317 "num_base_bdevs": 3, 00:15:09.317 "num_base_bdevs_discovered": 3, 00:15:09.317 "num_base_bdevs_operational": 3, 00:15:09.317 "base_bdevs_list": [ 00:15:09.317 { 00:15:09.317 "name": "pt1", 00:15:09.317 "uuid": "5b3bbfa3-5417-5d27-bc60-f53edeb29243", 00:15:09.317 "is_configured": true, 00:15:09.317 "data_offset": 2048, 00:15:09.317 "data_size": 63488 00:15:09.317 }, 00:15:09.317 { 00:15:09.317 "name": "pt2", 00:15:09.317 "uuid": "d16fce87-26b2-558e-b5fc-de511657b8da", 00:15:09.317 "is_configured": true, 00:15:09.317 "data_offset": 2048, 00:15:09.317 "data_size": 63488 00:15:09.317 }, 00:15:09.317 { 00:15:09.317 "name": "pt3", 00:15:09.317 "uuid": "bd3315c5-a087-5537-9bd1-4999ce518d6d", 00:15:09.317 "is_configured": true, 00:15:09.317 "data_offset": 2048, 00:15:09.317 "data_size": 63488 00:15:09.317 } 00:15:09.317 ] 00:15:09.317 }' 00:15:09.317 04:52:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.317 04:52:23 -- common/autotest_common.sh@10 -- # set +x 00:15:09.883 04:52:23 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:09.883 04:52:23 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:10.141 [2024-05-15 04:52:24.149089] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.141 04:52:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1d2aa8d5-3acc-4e1f-a849-f6220e206956 00:15:10.141 04:52:24 -- bdev/bdev_raid.sh@380 -- # '[' -z 1d2aa8d5-3acc-4e1f-a849-f6220e206956 ']' 00:15:10.141 04:52:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:10.141 [2024-05-15 04:52:24.369006] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.141 [2024-05-15 04:52:24.369033] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.141 [2024-05-15 04:52:24.369100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.141 [2024-05-15 04:52:24.369144] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.141 [2024-05-15 04:52:24.369153] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c180 name raid_bdev1, state offline 00:15:10.399 04:52:24 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:10.399 04:52:24 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.399 04:52:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:10.399 04:52:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:10.399 04:52:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:10.399 04:52:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:10.657 04:52:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:10.657 04:52:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:10.915 04:52:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:10.915 04:52:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:10.915 04:52:25 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:10.915 04:52:25 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:11.174 04:52:25 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:11.174 04:52:25 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:11.174 04:52:25 -- common/autotest_common.sh@640 -- # local es=0 00:15:11.174 04:52:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:11.174 04:52:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.174 04:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:11.174 04:52:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.174 04:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:11.174 04:52:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.174 04:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:11.174 04:52:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.174 04:52:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:11.174 04:52:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:11.432 [2024-05-15 04:52:25.469114] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:11.432 [2024-05-15 04:52:25.470428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:11.432 [2024-05-15 04:52:25.470463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:11.432 [2024-05-15 04:52:25.470497] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:11.432 [2024-05-15 04:52:25.470558] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:11.432 [2024-05-15 04:52:25.470588] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:11.432 [2024-05-15 04:52:25.470625] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.432 [2024-05-15 04:52:25.470636] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c780 name raid_bdev1, state configuring 00:15:11.432 request: 00:15:11.432 { 00:15:11.432 "name": "raid_bdev1", 00:15:11.432 "raid_level": "concat", 00:15:11.432 "base_bdevs": [ 00:15:11.432 "malloc1", 00:15:11.432 "malloc2", 00:15:11.432 "malloc3" 00:15:11.432 ], 00:15:11.432 "superblock": false, 00:15:11.432 "strip_size_kb": 64, 00:15:11.432 "method": "bdev_raid_create", 00:15:11.432 "req_id": 1 00:15:11.432 } 00:15:11.432 Got JSON-RPC error response 00:15:11.432 response: 00:15:11.432 { 00:15:11.432 "code": -17, 00:15:11.432 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:11.432 } 00:15:11.432 04:52:25 -- common/autotest_common.sh@643 -- # es=1 00:15:11.432 04:52:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:11.432 04:52:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:11.432 04:52:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:11.432 04:52:25 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:11.432 04:52:25 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.702 [2024-05-15 04:52:25.837130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.702 [2024-05-15 04:52:25.837196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.702 [2024-05-15 04:52:25.837251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:15:11.702 [2024-05-15 04:52:25.837275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.702 [2024-05-15 04:52:25.839212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.702 [2024-05-15 04:52:25.839248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.702 [2024-05-15 04:52:25.839352] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:11.702 [2024-05-15 04:52:25.839400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.702 pt1 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.702 04:52:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.974 04:52:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.974 "name": "raid_bdev1", 00:15:11.974 "uuid": "1d2aa8d5-3acc-4e1f-a849-f6220e206956", 00:15:11.974 "strip_size_kb": 64, 00:15:11.974 "state": "configuring", 00:15:11.974 "raid_level": "concat", 00:15:11.974 "superblock": true, 00:15:11.974 "num_base_bdevs": 3, 00:15:11.974 "num_base_bdevs_discovered": 1, 00:15:11.974 "num_base_bdevs_operational": 3, 00:15:11.974 "base_bdevs_list": [ 00:15:11.974 { 00:15:11.974 "name": "pt1", 00:15:11.974 "uuid": "5b3bbfa3-5417-5d27-bc60-f53edeb29243", 00:15:11.974 "is_configured": true, 00:15:11.974 "data_offset": 2048, 00:15:11.974 "data_size": 63488 00:15:11.974 }, 00:15:11.974 { 00:15:11.974 "name": null, 00:15:11.974 "uuid": "d16fce87-26b2-558e-b5fc-de511657b8da", 00:15:11.974 "is_configured": false, 00:15:11.974 "data_offset": 2048, 00:15:11.974 "data_size": 63488 00:15:11.974 }, 00:15:11.974 { 00:15:11.974 "name": null, 00:15:11.974 "uuid": "bd3315c5-a087-5537-9bd1-4999ce518d6d", 00:15:11.974 "is_configured": false, 00:15:11.974 "data_offset": 2048, 00:15:11.974 "data_size": 63488 00:15:11.974 } 00:15:11.974 ] 00:15:11.974 }' 00:15:11.974 04:52:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.974 04:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.542 04:52:26 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:12.542 04:52:26 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.799 [2024-05-15 04:52:26.809248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.800 [2024-05-15 04:52:26.809340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.800 [2024-05-15 04:52:26.809394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f480 00:15:12.800 [2024-05-15 04:52:26.809416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.800 [2024-05-15 04:52:26.809950] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.800 [2024-05-15 04:52:26.809986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.800 [2024-05-15 04:52:26.810085] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:12.800 [2024-05-15 04:52:26.810110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.800 pt2 00:15:12.800 04:52:26 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:13.056 [2024-05-15 04:52:27.033286] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.056 04:52:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.057 04:52:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.057 "name": "raid_bdev1", 00:15:13.057 "uuid": "1d2aa8d5-3acc-4e1f-a849-f6220e206956", 00:15:13.057 "strip_size_kb": 64, 00:15:13.057 "state": "configuring", 00:15:13.057 "raid_level": "concat", 00:15:13.057 "superblock": true, 00:15:13.057 "num_base_bdevs": 3, 00:15:13.057 "num_base_bdevs_discovered": 1, 00:15:13.057 "num_base_bdevs_operational": 3, 00:15:13.057 "base_bdevs_list": [ 00:15:13.057 { 00:15:13.057 "name": "pt1", 00:15:13.057 "uuid": "5b3bbfa3-5417-5d27-bc60-f53edeb29243", 00:15:13.057 "is_configured": true, 00:15:13.057 "data_offset": 2048, 00:15:13.057 "data_size": 63488 00:15:13.057 }, 00:15:13.057 { 00:15:13.057 "name": null, 00:15:13.057 "uuid": "d16fce87-26b2-558e-b5fc-de511657b8da", 00:15:13.057 "is_configured": false, 00:15:13.057 "data_offset": 2048, 00:15:13.057 "data_size": 63488 00:15:13.057 }, 00:15:13.057 { 00:15:13.057 "name": null, 00:15:13.057 "uuid": "bd3315c5-a087-5537-9bd1-4999ce518d6d", 00:15:13.057 "is_configured": false, 00:15:13.057 "data_offset": 2048, 00:15:13.057 "data_size": 63488 00:15:13.057 } 00:15:13.057 ] 00:15:13.057 }' 00:15:13.057 04:52:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.057 04:52:27 -- common/autotest_common.sh@10 -- # set +x 00:15:13.623 04:52:27 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:13.623 04:52:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:13.623 04:52:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.881 [2024-05-15 04:52:27.945404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.881 [2024-05-15 04:52:27.945496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.881 [2024-05-15 04:52:27.945551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030c80 00:15:13.881 [2024-05-15 04:52:27.945578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.881 [2024-05-15 04:52:27.946117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.881 [2024-05-15 04:52:27.946155] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.881 [2024-05-15 04:52:27.946257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:13.881 [2024-05-15 04:52:27.946280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.881 pt2 00:15:13.881 04:52:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:13.881 04:52:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:13.881 04:52:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:13.881 [2024-05-15 04:52:28.101378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:13.881 [2024-05-15 04:52:28.101435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.881 [2024-05-15 04:52:28.101486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032180 00:15:13.881 [2024-05-15 04:52:28.101511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.881 [2024-05-15 04:52:28.101971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.881 [2024-05-15 04:52:28.102012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:13.881 [2024-05-15 04:52:28.102114] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:13.881 [2024-05-15 04:52:28.102136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:13.881 [2024-05-15 04:52:28.102208] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:15:13.881 [2024-05-15 04:52:28.102218] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:13.881 [2024-05-15 04:52:28.102301] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:13.881 [2024-05-15 04:52:28.102472] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:15:13.881 [2024-05-15 04:52:28.102482] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:15:13.881 [2024-05-15 04:52:28.102564] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.881 pt3 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.141 
04:52:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.141 "name": "raid_bdev1", 00:15:14.141 "uuid": "1d2aa8d5-3acc-4e1f-a849-f6220e206956", 00:15:14.141 "strip_size_kb": 64, 00:15:14.141 "state": "online", 00:15:14.141 "raid_level": "concat", 00:15:14.141 "superblock": true, 00:15:14.141 "num_base_bdevs": 3, 00:15:14.141 "num_base_bdevs_discovered": 3, 00:15:14.141 "num_base_bdevs_operational": 3, 00:15:14.141 "base_bdevs_list": [ 00:15:14.141 { 00:15:14.141 "name": "pt1", 00:15:14.141 "uuid": "5b3bbfa3-5417-5d27-bc60-f53edeb29243", 00:15:14.141 "is_configured": true, 00:15:14.141 "data_offset": 2048, 00:15:14.141 "data_size": 63488 00:15:14.141 }, 00:15:14.141 { 00:15:14.141 "name": "pt2", 00:15:14.141 "uuid": "d16fce87-26b2-558e-b5fc-de511657b8da", 00:15:14.141 "is_configured": true, 00:15:14.141 "data_offset": 2048, 00:15:14.141 "data_size": 63488 00:15:14.141 }, 00:15:14.141 { 00:15:14.141 "name": "pt3", 00:15:14.141 "uuid": "bd3315c5-a087-5537-9bd1-4999ce518d6d", 00:15:14.141 "is_configured": true, 00:15:14.141 "data_offset": 2048, 00:15:14.141 "data_size": 63488 00:15:14.141 } 00:15:14.141 ] 00:15:14.141 }' 00:15:14.141 04:52:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.141 04:52:28 -- common/autotest_common.sh@10 -- # set +x 00:15:14.709 04:52:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:14.709 04:52:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:14.968 [2024-05-15 04:52:29.113715] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.968 04:52:29 -- bdev/bdev_raid.sh@430 -- # '[' 1d2aa8d5-3acc-4e1f-a849-f6220e206956 '!=' 1d2aa8d5-3acc-4e1f-a849-f6220e206956 ']' 00:15:14.968 04:52:29 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:14.968 04:52:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:14.968 04:52:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:14.968 04:52:29 -- bdev/bdev_raid.sh@511 -- # killprocess 51146 00:15:14.968 04:52:29 -- common/autotest_common.sh@926 -- # '[' -z 51146 ']' 00:15:14.968 04:52:29 -- common/autotest_common.sh@930 -- # kill -0 51146 00:15:14.968 04:52:29 -- common/autotest_common.sh@931 -- # uname 00:15:14.968 04:52:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.968 04:52:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 51146 00:15:14.968 killing process with pid 51146 00:15:14.968 04:52:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.968 04:52:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.968 04:52:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51146' 00:15:14.968 04:52:29 -- common/autotest_common.sh@945 -- # kill 51146 00:15:14.968 [2024-05-15 04:52:29.161033] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.968 04:52:29 -- common/autotest_common.sh@950 -- # wait 51146 00:15:14.968 [2024-05-15 04:52:29.161088] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.968 [2024-05-15 04:52:29.161130] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.968 [2024-05-15 04:52:29.161140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:15:15.227 [2024-05-15 04:52:29.457748] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.130 ************************************ 00:15:17.130 END TEST raid_superblock_test 00:15:17.131 ************************************ 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:17.131 00:15:17.131 real 0m10.613s 00:15:17.131 user 0m17.235s 00:15:17.131 sys 0m1.382s 00:15:17.131 04:52:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.131 04:52:30 -- common/autotest_common.sh@10 -- # set +x 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:15:17.131 04:52:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:17.131 04:52:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.131 04:52:30 -- common/autotest_common.sh@10 -- # set +x 00:15:17.131 ************************************ 00:15:17.131 START TEST raid_state_function_test 00:15:17.131 ************************************ 00:15:17.131 04:52:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:17.131 Process raid pid: 51455 00:15:17.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
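
The verify_raid_bdev_state calls traced throughout this run reduce to a single RPC plus a jq filter over its output. A condensed sketch, with the field names taken from the JSON dumps above (as the local variables suggest, the real helper in bdev_raid.sh also compares raid_level, strip size and the base-bdev counts, not just the state):

  # fetch the raid bdev's info and check that it is in the expected state
  tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(jq -r '.state' <<< "$tmp")" = configuring ]   # non-zero exit fails the test
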
00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=51455 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51455' 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.131 04:52:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51455 /var/tmp/spdk-raid.sock 00:15:17.131 04:52:30 -- common/autotest_common.sh@819 -- # '[' -z 51455 ']' 00:15:17.131 04:52:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.131 04:52:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.131 04:52:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.131 04:52:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.131 04:52:30 -- common/autotest_common.sh@10 -- # set +x 00:15:17.131 [2024-05-15 04:52:31.091300] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:17.131 [2024-05-15 04:52:31.091441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.131 [2024-05-15 04:52:31.241318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.389 [2024-05-15 04:52:31.484508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.647 [2024-05-15 04:52:31.746827] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.581 04:52:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.581 04:52:32 -- common/autotest_common.sh@852 -- # return 0 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:18.581 [2024-05-15 04:52:32.706056] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.581 [2024-05-15 04:52:32.706128] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.581 [2024-05-15 04:52:32.706139] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.581 [2024-05-15 04:52:32.706158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.581 [2024-05-15 04:52:32.706165] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:18.581 [2024-05-15 04:52:32.706211] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.581 
04:52:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.581 04:52:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.840 04:52:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.840 "name": "Existed_Raid", 00:15:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.840 "strip_size_kb": 0, 00:15:18.840 "state": "configuring", 00:15:18.840 "raid_level": "raid1", 00:15:18.840 "superblock": false, 00:15:18.840 "num_base_bdevs": 3, 00:15:18.840 "num_base_bdevs_discovered": 0, 00:15:18.840 "num_base_bdevs_operational": 3, 00:15:18.840 "base_bdevs_list": [ 00:15:18.840 { 00:15:18.840 "name": "BaseBdev1", 00:15:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.840 "is_configured": false, 00:15:18.840 "data_offset": 0, 00:15:18.840 "data_size": 0 00:15:18.840 }, 00:15:18.840 { 00:15:18.840 "name": "BaseBdev2", 00:15:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.840 "is_configured": false, 00:15:18.840 "data_offset": 0, 00:15:18.840 "data_size": 0 00:15:18.840 }, 00:15:18.840 { 00:15:18.840 "name": "BaseBdev3", 00:15:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.840 "is_configured": false, 00:15:18.840 "data_offset": 0, 00:15:18.840 "data_size": 0 00:15:18.840 } 00:15:18.840 ] 00:15:18.840 }' 00:15:18.840 04:52:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.840 04:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:19.409 04:52:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.409 [2024-05-15 04:52:33.550144] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.409 [2024-05-15 04:52:33.550188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:15:19.409 04:52:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:19.667 [2024-05-15 04:52:33.686150] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.667 [2024-05-15 04:52:33.686215] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.667 [2024-05-15 04:52:33.686226] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.667 [2024-05-15 04:52:33.686245] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.667 [2024-05-15 04:52:33.686253] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.667 [2024-05-15 04:52:33.686289] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.667 04:52:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.667 [2024-05-15 04:52:33.880634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.667 BaseBdev1 00:15:19.667 04:52:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 
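
waitforbdev blocks until the bdev just created has been registered and is inspectable; as the trace that follows shows, it boils down to these two RPCs (the -t 2000 timeout is in milliseconds, and per the locals above the helper wraps them in its own timeout handling):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
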
00:15:19.667 04:52:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:19.667 04:52:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:19.667 04:52:33 -- common/autotest_common.sh@889 -- # local i 00:15:19.667 04:52:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:19.667 04:52:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:19.667 04:52:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.925 04:52:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.184 [ 00:15:20.184 { 00:15:20.184 "name": "BaseBdev1", 00:15:20.184 "aliases": [ 00:15:20.184 "ef287b65-b0d5-457e-b7b4-da1738771f84" 00:15:20.184 ], 00:15:20.184 "product_name": "Malloc disk", 00:15:20.184 "block_size": 512, 00:15:20.184 "num_blocks": 65536, 00:15:20.184 "uuid": "ef287b65-b0d5-457e-b7b4-da1738771f84", 00:15:20.184 "assigned_rate_limits": { 00:15:20.184 "rw_ios_per_sec": 0, 00:15:20.184 "rw_mbytes_per_sec": 0, 00:15:20.184 "r_mbytes_per_sec": 0, 00:15:20.184 "w_mbytes_per_sec": 0 00:15:20.184 }, 00:15:20.184 "claimed": true, 00:15:20.184 "claim_type": "exclusive_write", 00:15:20.184 "zoned": false, 00:15:20.184 "supported_io_types": { 00:15:20.184 "read": true, 00:15:20.184 "write": true, 00:15:20.184 "unmap": true, 00:15:20.184 "write_zeroes": true, 00:15:20.184 "flush": true, 00:15:20.184 "reset": true, 00:15:20.184 "compare": false, 00:15:20.184 "compare_and_write": false, 00:15:20.184 "abort": true, 00:15:20.184 "nvme_admin": false, 00:15:20.184 "nvme_io": false 00:15:20.184 }, 00:15:20.184 "memory_domains": [ 00:15:20.184 { 00:15:20.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.184 "dma_device_type": 2 00:15:20.184 } 00:15:20.184 ], 00:15:20.184 "driver_specific": {} 00:15:20.184 } 00:15:20.184 ] 00:15:20.184 04:52:34 -- common/autotest_common.sh@895 -- # return 0 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.184 "name": "Existed_Raid", 00:15:20.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.184 "strip_size_kb": 0, 00:15:20.184 "state": "configuring", 00:15:20.184 "raid_level": "raid1", 00:15:20.184 "superblock": false, 00:15:20.184 "num_base_bdevs": 3, 00:15:20.184 "num_base_bdevs_discovered": 1, 00:15:20.184 "num_base_bdevs_operational": 3, 00:15:20.184 "base_bdevs_list": [ 
00:15:20.184 { 00:15:20.184 "name": "BaseBdev1", 00:15:20.184 "uuid": "ef287b65-b0d5-457e-b7b4-da1738771f84", 00:15:20.184 "is_configured": true, 00:15:20.184 "data_offset": 0, 00:15:20.184 "data_size": 65536 00:15:20.184 }, 00:15:20.184 { 00:15:20.184 "name": "BaseBdev2", 00:15:20.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.184 "is_configured": false, 00:15:20.184 "data_offset": 0, 00:15:20.184 "data_size": 0 00:15:20.184 }, 00:15:20.184 { 00:15:20.184 "name": "BaseBdev3", 00:15:20.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.184 "is_configured": false, 00:15:20.184 "data_offset": 0, 00:15:20.184 "data_size": 0 00:15:20.184 } 00:15:20.184 ] 00:15:20.184 }' 00:15:20.184 04:52:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.184 04:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:21.121 04:52:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.121 [2024-05-15 04:52:35.184746] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.121 [2024-05-15 04:52:35.184803] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:21.121 [2024-05-15 04:52:35.320818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.121 [2024-05-15 04:52:35.322087] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.121 [2024-05-15 04:52:35.322148] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.121 [2024-05-15 04:52:35.322159] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.121 [2024-05-15 04:52:35.322191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.121 04:52:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.380 04:52:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.380 "name": "Existed_Raid", 00:15:21.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.380 
"strip_size_kb": 0, 00:15:21.380 "state": "configuring", 00:15:21.380 "raid_level": "raid1", 00:15:21.380 "superblock": false, 00:15:21.380 "num_base_bdevs": 3, 00:15:21.380 "num_base_bdevs_discovered": 1, 00:15:21.380 "num_base_bdevs_operational": 3, 00:15:21.380 "base_bdevs_list": [ 00:15:21.380 { 00:15:21.380 "name": "BaseBdev1", 00:15:21.380 "uuid": "ef287b65-b0d5-457e-b7b4-da1738771f84", 00:15:21.380 "is_configured": true, 00:15:21.380 "data_offset": 0, 00:15:21.380 "data_size": 65536 00:15:21.380 }, 00:15:21.380 { 00:15:21.380 "name": "BaseBdev2", 00:15:21.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.380 "is_configured": false, 00:15:21.380 "data_offset": 0, 00:15:21.380 "data_size": 0 00:15:21.380 }, 00:15:21.380 { 00:15:21.380 "name": "BaseBdev3", 00:15:21.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.380 "is_configured": false, 00:15:21.380 "data_offset": 0, 00:15:21.380 "data_size": 0 00:15:21.380 } 00:15:21.380 ] 00:15:21.380 }' 00:15:21.380 04:52:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.380 04:52:35 -- common/autotest_common.sh@10 -- # set +x 00:15:21.947 04:52:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.206 [2024-05-15 04:52:36.364337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.206 BaseBdev2 00:15:22.206 04:52:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:22.206 04:52:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:22.206 04:52:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:22.206 04:52:36 -- common/autotest_common.sh@889 -- # local i 00:15:22.206 04:52:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:22.206 04:52:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:22.206 04:52:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.465 04:52:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.465 [ 00:15:22.465 { 00:15:22.465 "name": "BaseBdev2", 00:15:22.465 "aliases": [ 00:15:22.465 "7c316a77-5930-4786-bb44-4064df35edf7" 00:15:22.465 ], 00:15:22.465 "product_name": "Malloc disk", 00:15:22.465 "block_size": 512, 00:15:22.465 "num_blocks": 65536, 00:15:22.465 "uuid": "7c316a77-5930-4786-bb44-4064df35edf7", 00:15:22.465 "assigned_rate_limits": { 00:15:22.465 "rw_ios_per_sec": 0, 00:15:22.465 "rw_mbytes_per_sec": 0, 00:15:22.465 "r_mbytes_per_sec": 0, 00:15:22.465 "w_mbytes_per_sec": 0 00:15:22.465 }, 00:15:22.465 "claimed": true, 00:15:22.465 "claim_type": "exclusive_write", 00:15:22.465 "zoned": false, 00:15:22.465 "supported_io_types": { 00:15:22.465 "read": true, 00:15:22.465 "write": true, 00:15:22.465 "unmap": true, 00:15:22.465 "write_zeroes": true, 00:15:22.465 "flush": true, 00:15:22.465 "reset": true, 00:15:22.465 "compare": false, 00:15:22.465 "compare_and_write": false, 00:15:22.465 "abort": true, 00:15:22.465 "nvme_admin": false, 00:15:22.465 "nvme_io": false 00:15:22.465 }, 00:15:22.465 "memory_domains": [ 00:15:22.465 { 00:15:22.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.465 "dma_device_type": 2 00:15:22.465 } 00:15:22.465 ], 00:15:22.465 "driver_specific": {} 00:15:22.465 } 00:15:22.465 ] 00:15:22.465 04:52:36 -- common/autotest_common.sh@895 -- # return 0 00:15:22.465 04:52:36 -- 
bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.465 04:52:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.724 04:52:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.724 "name": "Existed_Raid", 00:15:22.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.724 "strip_size_kb": 0, 00:15:22.724 "state": "configuring", 00:15:22.724 "raid_level": "raid1", 00:15:22.724 "superblock": false, 00:15:22.724 "num_base_bdevs": 3, 00:15:22.724 "num_base_bdevs_discovered": 2, 00:15:22.724 "num_base_bdevs_operational": 3, 00:15:22.724 "base_bdevs_list": [ 00:15:22.724 { 00:15:22.724 "name": "BaseBdev1", 00:15:22.724 "uuid": "ef287b65-b0d5-457e-b7b4-da1738771f84", 00:15:22.724 "is_configured": true, 00:15:22.724 "data_offset": 0, 00:15:22.724 "data_size": 65536 00:15:22.724 }, 00:15:22.724 { 00:15:22.724 "name": "BaseBdev2", 00:15:22.724 "uuid": "7c316a77-5930-4786-bb44-4064df35edf7", 00:15:22.724 "is_configured": true, 00:15:22.724 "data_offset": 0, 00:15:22.724 "data_size": 65536 00:15:22.724 }, 00:15:22.724 { 00:15:22.724 "name": "BaseBdev3", 00:15:22.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.724 "is_configured": false, 00:15:22.724 "data_offset": 0, 00:15:22.724 "data_size": 0 00:15:22.724 } 00:15:22.724 ] 00:15:22.724 }' 00:15:22.724 04:52:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.724 04:52:36 -- common/autotest_common.sh@10 -- # set +x 00:15:23.292 04:52:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.551 [2024-05-15 04:52:37.725153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.551 [2024-05-15 04:52:37.725216] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:15:23.551 [2024-05-15 04:52:37.725226] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:23.551 [2024-05-15 04:52:37.725317] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:23.551 [2024-05-15 04:52:37.725527] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:15:23.551 [2024-05-15 04:52:37.725537] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:15:23.551 BaseBdev3 00:15:23.551 [2024-05-15 04:52:37.725985] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
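The dump just above is the recurring verification pattern in this trace: fetch every raid bdev over the RPC socket, select Existed_Raid with jq, and assert on its fields. A minimal sketch of that check, assuming the rpc.py path and socket shown in the trace (the real logic lives in the verify_raid_bdev_state helper whose locals are echoed above); the rpc/sock shorthand variables are ours, not part of the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Pull the Existed_Raid description out of the full raid bdev list.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "Existed_Raid")')
    state=$(echo "$info" | jq -r '.state')
    discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
    # With BaseBdev1 and BaseBdev2 claimed but BaseBdev3 still missing, the
    # array must stay "configuring" with 2 of 3 base bdevs discovered.
    [ "$state" = configuring ] && [ "$discovered" -eq 2 ]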
00:15:23.551 04:52:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:23.551 04:52:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:23.551 04:52:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:23.551 04:52:37 -- common/autotest_common.sh@889 -- # local i 00:15:23.551 04:52:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:23.551 04:52:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:23.551 04:52:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.810 04:52:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.069 [ 00:15:24.069 { 00:15:24.069 "name": "BaseBdev3", 00:15:24.069 "aliases": [ 00:15:24.069 "0214bc77-f711-4369-85f0-e8b06bb3fce6" 00:15:24.069 ], 00:15:24.069 "product_name": "Malloc disk", 00:15:24.069 "block_size": 512, 00:15:24.069 "num_blocks": 65536, 00:15:24.069 "uuid": "0214bc77-f711-4369-85f0-e8b06bb3fce6", 00:15:24.069 "assigned_rate_limits": { 00:15:24.069 "rw_ios_per_sec": 0, 00:15:24.069 "rw_mbytes_per_sec": 0, 00:15:24.069 "r_mbytes_per_sec": 0, 00:15:24.069 "w_mbytes_per_sec": 0 00:15:24.069 }, 00:15:24.069 "claimed": true, 00:15:24.069 "claim_type": "exclusive_write", 00:15:24.069 "zoned": false, 00:15:24.069 "supported_io_types": { 00:15:24.069 "read": true, 00:15:24.069 "write": true, 00:15:24.069 "unmap": true, 00:15:24.069 "write_zeroes": true, 00:15:24.069 "flush": true, 00:15:24.069 "reset": true, 00:15:24.069 "compare": false, 00:15:24.069 "compare_and_write": false, 00:15:24.069 "abort": true, 00:15:24.069 "nvme_admin": false, 00:15:24.069 "nvme_io": false 00:15:24.069 }, 00:15:24.069 "memory_domains": [ 00:15:24.069 { 00:15:24.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.069 "dma_device_type": 2 00:15:24.069 } 00:15:24.069 ], 00:15:24.069 "driver_specific": {} 00:15:24.069 } 00:15:24.069 ] 00:15:24.069 04:52:38 -- common/autotest_common.sh@895 -- # return 0 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.069 "name": "Existed_Raid", 00:15:24.069 "uuid": "352dd3b8-bc46-4390-b4da-609d241c8288", 00:15:24.069 "strip_size_kb": 0, 00:15:24.069 "state": "online", 00:15:24.069 "raid_level": "raid1", 
00:15:24.069 "superblock": false, 00:15:24.069 "num_base_bdevs": 3, 00:15:24.069 "num_base_bdevs_discovered": 3, 00:15:24.069 "num_base_bdevs_operational": 3, 00:15:24.069 "base_bdevs_list": [ 00:15:24.069 { 00:15:24.069 "name": "BaseBdev1", 00:15:24.069 "uuid": "ef287b65-b0d5-457e-b7b4-da1738771f84", 00:15:24.069 "is_configured": true, 00:15:24.069 "data_offset": 0, 00:15:24.069 "data_size": 65536 00:15:24.069 }, 00:15:24.069 { 00:15:24.069 "name": "BaseBdev2", 00:15:24.069 "uuid": "7c316a77-5930-4786-bb44-4064df35edf7", 00:15:24.069 "is_configured": true, 00:15:24.069 "data_offset": 0, 00:15:24.069 "data_size": 65536 00:15:24.069 }, 00:15:24.069 { 00:15:24.069 "name": "BaseBdev3", 00:15:24.069 "uuid": "0214bc77-f711-4369-85f0-e8b06bb3fce6", 00:15:24.069 "is_configured": true, 00:15:24.069 "data_offset": 0, 00:15:24.069 "data_size": 65536 00:15:24.069 } 00:15:24.069 ] 00:15:24.069 }' 00:15:24.069 04:52:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.069 04:52:38 -- common/autotest_common.sh@10 -- # set +x 00:15:24.637 04:52:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:24.897 [2024-05-15 04:52:38.993348] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.897 04:52:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.156 04:52:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.156 "name": "Existed_Raid", 00:15:25.156 "uuid": "352dd3b8-bc46-4390-b4da-609d241c8288", 00:15:25.156 "strip_size_kb": 0, 00:15:25.156 "state": "online", 00:15:25.156 "raid_level": "raid1", 00:15:25.156 "superblock": false, 00:15:25.156 "num_base_bdevs": 3, 00:15:25.156 "num_base_bdevs_discovered": 2, 00:15:25.156 "num_base_bdevs_operational": 2, 00:15:25.156 "base_bdevs_list": [ 00:15:25.156 { 00:15:25.156 "name": null, 00:15:25.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.156 "is_configured": false, 00:15:25.156 "data_offset": 0, 00:15:25.156 "data_size": 65536 00:15:25.156 }, 00:15:25.156 { 00:15:25.156 "name": "BaseBdev2", 00:15:25.156 "uuid": "7c316a77-5930-4786-bb44-4064df35edf7", 00:15:25.156 "is_configured": true, 00:15:25.156 "data_offset": 0, 00:15:25.156 
"data_size": 65536 00:15:25.156 }, 00:15:25.156 { 00:15:25.156 "name": "BaseBdev3", 00:15:25.156 "uuid": "0214bc77-f711-4369-85f0-e8b06bb3fce6", 00:15:25.156 "is_configured": true, 00:15:25.156 "data_offset": 0, 00:15:25.156 "data_size": 65536 00:15:25.156 } 00:15:25.156 ] 00:15:25.156 }' 00:15:25.156 04:52:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.156 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 04:52:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:25.724 04:52:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:25.724 04:52:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:25.724 04:52:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.983 04:52:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:25.983 04:52:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.983 04:52:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:25.983 [2024-05-15 04:52:40.175002] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.242 04:52:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.242 04:52:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.242 04:52:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.242 04:52:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:26.501 04:52:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:26.501 04:52:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.501 04:52:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:26.501 [2024-05-15 04:52:40.716187] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.501 [2024-05-15 04:52:40.716221] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.501 [2024-05-15 04:52:40.716269] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.761 [2024-05-15 04:52:40.815701] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.761 [2024-05-15 04:52:40.815749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:26.761 04:52:40 -- bdev/bdev_raid.sh@287 -- # killprocess 51455 00:15:26.761 04:52:40 -- common/autotest_common.sh@926 -- # '[' -z 51455 ']' 00:15:26.761 04:52:40 -- common/autotest_common.sh@930 -- # kill -0 51455 00:15:26.761 04:52:40 -- common/autotest_common.sh@931 -- # uname 00:15:27.019 04:52:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.019 04:52:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 51455 00:15:27.019 killing process with pid 
51455 00:15:27.019 04:52:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:27.019 04:52:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:27.019 04:52:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51455' 00:15:27.019 04:52:41 -- common/autotest_common.sh@945 -- # kill 51455 00:15:27.019 04:52:41 -- common/autotest_common.sh@950 -- # wait 51455 00:15:27.019 [2024-05-15 04:52:41.018176] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.019 [2024-05-15 04:52:41.018294] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.396 ************************************ 00:15:28.396 END TEST raid_state_function_test 00:15:28.396 ************************************ 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:28.396 00:15:28.396 real 0m11.503s 00:15:28.396 user 0m19.012s 00:15:28.396 sys 0m1.533s 00:15:28.396 04:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.396 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:28.396 04:52:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:28.396 04:52:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.396 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.396 ************************************ 00:15:28.396 START TEST raid_state_function_test_sb 00:15:28.396 ************************************ 00:15:28.396 04:52:42 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:28.396 Process 
raid pid: 51828 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=51828 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51828' 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51828 /var/tmp/spdk-raid.sock 00:15:28.396 04:52:42 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.396 04:52:42 -- common/autotest_common.sh@819 -- # '[' -z 51828 ']' 00:15:28.396 04:52:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.396 04:52:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.396 04:52:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.396 04:52:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.396 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.655 [2024-05-15 04:52:42.655899] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:28.655 [2024-05-15 04:52:42.656122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.656 [2024-05-15 04:52:42.809284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.915 [2024-05-15 04:52:43.059426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.172 [2024-05-15 04:52:43.328322] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.108 04:52:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:30.108 04:52:44 -- common/autotest_common.sh@852 -- # return 0 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:30.108 [2024-05-15 04:52:44.306974] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.108 [2024-05-15 04:52:44.307046] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.108 [2024-05-15 04:52:44.307058] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.108 [2024-05-15 04:52:44.307075] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.108 [2024-05-15 04:52:44.307083] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.108 [2024-05-15 04:52:44.307125] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.108 04:52:44 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.108 04:52:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.367 04:52:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.367 "name": "Existed_Raid", 00:15:30.367 "uuid": "c67fed82-7158-452e-9bc1-29d22abb065e", 00:15:30.367 "strip_size_kb": 0, 00:15:30.367 "state": "configuring", 00:15:30.367 "raid_level": "raid1", 00:15:30.367 "superblock": true, 00:15:30.367 "num_base_bdevs": 3, 00:15:30.367 "num_base_bdevs_discovered": 0, 00:15:30.367 "num_base_bdevs_operational": 3, 00:15:30.367 "base_bdevs_list": [ 00:15:30.367 { 00:15:30.367 "name": "BaseBdev1", 00:15:30.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.367 "is_configured": false, 00:15:30.367 "data_offset": 0, 00:15:30.367 "data_size": 0 00:15:30.367 }, 00:15:30.367 { 00:15:30.367 "name": "BaseBdev2", 00:15:30.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.367 "is_configured": false, 00:15:30.367 "data_offset": 0, 00:15:30.367 "data_size": 0 00:15:30.367 }, 00:15:30.367 { 00:15:30.367 "name": "BaseBdev3", 00:15:30.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.367 "is_configured": false, 00:15:30.367 "data_offset": 0, 00:15:30.367 "data_size": 0 00:15:30.367 } 00:15:30.367 ] 00:15:30.367 }' 00:15:30.367 04:52:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.367 04:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.934 04:52:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.934 [2024-05-15 04:52:45.106902] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.934 [2024-05-15 04:52:45.106941] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:15:30.934 04:52:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:31.192 [2024-05-15 04:52:45.250987] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.192 [2024-05-15 04:52:45.251049] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.192 [2024-05-15 04:52:45.251059] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.192 [2024-05-15 04:52:45.251093] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.192 [2024-05-15 04:52:45.251100] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.192 [2024-05-15 04:52:45.251134] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.192 04:52:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.450 [2024-05-15 04:52:45.448276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.450 BaseBdev1 00:15:31.450 04:52:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:31.450 
04:52:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:31.450 04:52:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:31.450 04:52:45 -- common/autotest_common.sh@889 -- # local i 00:15:31.450 04:52:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:31.450 04:52:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:31.450 04:52:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.450 04:52:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.709 [ 00:15:31.709 { 00:15:31.709 "name": "BaseBdev1", 00:15:31.709 "aliases": [ 00:15:31.709 "7086cae8-4648-4187-bd14-0347c3777561" 00:15:31.709 ], 00:15:31.709 "product_name": "Malloc disk", 00:15:31.709 "block_size": 512, 00:15:31.709 "num_blocks": 65536, 00:15:31.709 "uuid": "7086cae8-4648-4187-bd14-0347c3777561", 00:15:31.709 "assigned_rate_limits": { 00:15:31.709 "rw_ios_per_sec": 0, 00:15:31.709 "rw_mbytes_per_sec": 0, 00:15:31.709 "r_mbytes_per_sec": 0, 00:15:31.709 "w_mbytes_per_sec": 0 00:15:31.709 }, 00:15:31.709 "claimed": true, 00:15:31.709 "claim_type": "exclusive_write", 00:15:31.709 "zoned": false, 00:15:31.709 "supported_io_types": { 00:15:31.709 "read": true, 00:15:31.709 "write": true, 00:15:31.709 "unmap": true, 00:15:31.709 "write_zeroes": true, 00:15:31.709 "flush": true, 00:15:31.709 "reset": true, 00:15:31.709 "compare": false, 00:15:31.709 "compare_and_write": false, 00:15:31.709 "abort": true, 00:15:31.709 "nvme_admin": false, 00:15:31.709 "nvme_io": false 00:15:31.709 }, 00:15:31.709 "memory_domains": [ 00:15:31.709 { 00:15:31.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.709 "dma_device_type": 2 00:15:31.709 } 00:15:31.709 ], 00:15:31.709 "driver_specific": {} 00:15:31.709 } 00:15:31.709 ] 00:15:31.709 04:52:45 -- common/autotest_common.sh@895 -- # return 0 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.709 04:52:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.967 04:52:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.967 "name": "Existed_Raid", 00:15:31.967 "uuid": "d4f9d75d-343f-42ae-94ac-9adc07b42521", 00:15:31.967 "strip_size_kb": 0, 00:15:31.967 "state": "configuring", 00:15:31.967 "raid_level": "raid1", 00:15:31.967 "superblock": true, 00:15:31.967 "num_base_bdevs": 3, 00:15:31.967 "num_base_bdevs_discovered": 1, 00:15:31.967 "num_base_bdevs_operational": 3, 00:15:31.967 "base_bdevs_list": [ 00:15:31.967 { 
00:15:31.967 "name": "BaseBdev1", 00:15:31.967 "uuid": "7086cae8-4648-4187-bd14-0347c3777561", 00:15:31.967 "is_configured": true, 00:15:31.967 "data_offset": 2048, 00:15:31.968 "data_size": 63488 00:15:31.968 }, 00:15:31.968 { 00:15:31.968 "name": "BaseBdev2", 00:15:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.968 "is_configured": false, 00:15:31.968 "data_offset": 0, 00:15:31.968 "data_size": 0 00:15:31.968 }, 00:15:31.968 { 00:15:31.968 "name": "BaseBdev3", 00:15:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.968 "is_configured": false, 00:15:31.968 "data_offset": 0, 00:15:31.968 "data_size": 0 00:15:31.968 } 00:15:31.968 ] 00:15:31.968 }' 00:15:31.968 04:52:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.968 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:32.535 04:52:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.535 [2024-05-15 04:52:46.748396] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.535 [2024-05-15 04:52:46.748446] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:15:32.535 04:52:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:32.535 04:52:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:32.793 04:52:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.051 BaseBdev1 00:15:33.051 04:52:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:33.051 04:52:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:33.051 04:52:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:33.051 04:52:47 -- common/autotest_common.sh@889 -- # local i 00:15:33.051 04:52:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:33.051 04:52:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:33.051 04:52:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.310 04:52:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.310 [ 00:15:33.310 { 00:15:33.310 "name": "BaseBdev1", 00:15:33.310 "aliases": [ 00:15:33.310 "bc56c807-11ca-4cec-9974-9254796ce5d2" 00:15:33.310 ], 00:15:33.310 "product_name": "Malloc disk", 00:15:33.310 "block_size": 512, 00:15:33.310 "num_blocks": 65536, 00:15:33.310 "uuid": "bc56c807-11ca-4cec-9974-9254796ce5d2", 00:15:33.310 "assigned_rate_limits": { 00:15:33.310 "rw_ios_per_sec": 0, 00:15:33.310 "rw_mbytes_per_sec": 0, 00:15:33.310 "r_mbytes_per_sec": 0, 00:15:33.310 "w_mbytes_per_sec": 0 00:15:33.310 }, 00:15:33.310 "claimed": false, 00:15:33.310 "zoned": false, 00:15:33.310 "supported_io_types": { 00:15:33.310 "read": true, 00:15:33.310 "write": true, 00:15:33.310 "unmap": true, 00:15:33.310 "write_zeroes": true, 00:15:33.310 "flush": true, 00:15:33.310 "reset": true, 00:15:33.310 "compare": false, 00:15:33.310 "compare_and_write": false, 00:15:33.310 "abort": true, 00:15:33.310 "nvme_admin": false, 00:15:33.310 "nvme_io": false 00:15:33.310 }, 00:15:33.310 "memory_domains": [ 00:15:33.310 { 00:15:33.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.310 "dma_device_type": 2 
00:15:33.310 } 00:15:33.310 ], 00:15:33.310 "driver_specific": {} 00:15:33.310 } 00:15:33.310 ] 00:15:33.310 04:52:47 -- common/autotest_common.sh@895 -- # return 0 00:15:33.310 04:52:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:33.569 [2024-05-15 04:52:47.577150] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.569 [2024-05-15 04:52:47.578390] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.569 [2024-05-15 04:52:47.578442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.569 [2024-05-15 04:52:47.578452] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.569 [2024-05-15 04:52:47.578475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.569 "name": "Existed_Raid", 00:15:33.569 "uuid": "b1dc58ff-490b-4e53-93ab-26af188c05da", 00:15:33.569 "strip_size_kb": 0, 00:15:33.569 "state": "configuring", 00:15:33.569 "raid_level": "raid1", 00:15:33.569 "superblock": true, 00:15:33.569 "num_base_bdevs": 3, 00:15:33.569 "num_base_bdevs_discovered": 1, 00:15:33.569 "num_base_bdevs_operational": 3, 00:15:33.569 "base_bdevs_list": [ 00:15:33.569 { 00:15:33.569 "name": "BaseBdev1", 00:15:33.569 "uuid": "bc56c807-11ca-4cec-9974-9254796ce5d2", 00:15:33.569 "is_configured": true, 00:15:33.569 "data_offset": 2048, 00:15:33.569 "data_size": 63488 00:15:33.569 }, 00:15:33.569 { 00:15:33.569 "name": "BaseBdev2", 00:15:33.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.569 "is_configured": false, 00:15:33.569 "data_offset": 0, 00:15:33.569 "data_size": 0 00:15:33.569 }, 00:15:33.569 { 00:15:33.569 "name": "BaseBdev3", 00:15:33.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.569 "is_configured": false, 00:15:33.569 "data_offset": 0, 00:15:33.569 "data_size": 0 00:15:33.569 } 00:15:33.569 ] 00:15:33.569 }' 00:15:33.569 04:52:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.569 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:34.505 04:52:48 -- 
bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.505 [2024-05-15 04:52:48.618202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.505 BaseBdev2 00:15:34.505 04:52:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:34.505 04:52:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:34.505 04:52:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:34.505 04:52:48 -- common/autotest_common.sh@889 -- # local i 00:15:34.506 04:52:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:34.506 04:52:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:34.506 04:52:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.764 04:52:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.764 [ 00:15:34.764 { 00:15:34.764 "name": "BaseBdev2", 00:15:34.764 "aliases": [ 00:15:34.764 "dcb57c9c-0a7b-4dcc-9fe6-bd770d5a0cd5" 00:15:34.764 ], 00:15:34.764 "product_name": "Malloc disk", 00:15:34.764 "block_size": 512, 00:15:34.765 "num_blocks": 65536, 00:15:34.765 "uuid": "dcb57c9c-0a7b-4dcc-9fe6-bd770d5a0cd5", 00:15:34.765 "assigned_rate_limits": { 00:15:34.765 "rw_ios_per_sec": 0, 00:15:34.765 "rw_mbytes_per_sec": 0, 00:15:34.765 "r_mbytes_per_sec": 0, 00:15:34.765 "w_mbytes_per_sec": 0 00:15:34.765 }, 00:15:34.765 "claimed": true, 00:15:34.765 "claim_type": "exclusive_write", 00:15:34.765 "zoned": false, 00:15:34.765 "supported_io_types": { 00:15:34.765 "read": true, 00:15:34.765 "write": true, 00:15:34.765 "unmap": true, 00:15:34.765 "write_zeroes": true, 00:15:34.765 "flush": true, 00:15:34.765 "reset": true, 00:15:34.765 "compare": false, 00:15:34.765 "compare_and_write": false, 00:15:34.765 "abort": true, 00:15:34.765 "nvme_admin": false, 00:15:34.765 "nvme_io": false 00:15:34.765 }, 00:15:34.765 "memory_domains": [ 00:15:34.765 { 00:15:34.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.765 "dma_device_type": 2 00:15:34.765 } 00:15:34.765 ], 00:15:34.765 "driver_specific": {} 00:15:34.765 } 00:15:34.765 ] 00:15:34.765 04:52:48 -- common/autotest_common.sh@895 -- # return 0 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.765 04:52:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:35.024 04:52:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.024 "name": "Existed_Raid", 00:15:35.024 "uuid": "b1dc58ff-490b-4e53-93ab-26af188c05da", 00:15:35.024 "strip_size_kb": 0, 00:15:35.024 "state": "configuring", 00:15:35.024 "raid_level": "raid1", 00:15:35.024 "superblock": true, 00:15:35.024 "num_base_bdevs": 3, 00:15:35.024 "num_base_bdevs_discovered": 2, 00:15:35.024 "num_base_bdevs_operational": 3, 00:15:35.024 "base_bdevs_list": [ 00:15:35.024 { 00:15:35.024 "name": "BaseBdev1", 00:15:35.024 "uuid": "bc56c807-11ca-4cec-9974-9254796ce5d2", 00:15:35.024 "is_configured": true, 00:15:35.024 "data_offset": 2048, 00:15:35.024 "data_size": 63488 00:15:35.024 }, 00:15:35.024 { 00:15:35.024 "name": "BaseBdev2", 00:15:35.024 "uuid": "dcb57c9c-0a7b-4dcc-9fe6-bd770d5a0cd5", 00:15:35.024 "is_configured": true, 00:15:35.024 "data_offset": 2048, 00:15:35.024 "data_size": 63488 00:15:35.024 }, 00:15:35.024 { 00:15:35.024 "name": "BaseBdev3", 00:15:35.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.024 "is_configured": false, 00:15:35.024 "data_offset": 0, 00:15:35.024 "data_size": 0 00:15:35.024 } 00:15:35.024 ] 00:15:35.024 }' 00:15:35.024 04:52:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.024 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.592 04:52:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.850 [2024-05-15 04:52:49.974633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.850 BaseBdev3 00:15:35.850 [2024-05-15 04:52:49.974986] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:15:35.850 [2024-05-15 04:52:49.975005] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:35.850 [2024-05-15 04:52:49.975095] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:35.850 [2024-05-15 04:52:49.975322] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:15:35.850 [2024-05-15 04:52:49.975333] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:15:35.850 [2024-05-15 04:52:49.975424] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.850 04:52:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:35.850 04:52:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:35.850 04:52:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:35.850 04:52:49 -- common/autotest_common.sh@889 -- # local i 00:15:35.850 04:52:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:35.850 04:52:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:35.851 04:52:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.109 04:52:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.368 [ 00:15:36.368 { 00:15:36.368 "name": "BaseBdev3", 00:15:36.368 "aliases": [ 00:15:36.368 "114beef7-8619-418d-aeee-3835e4979b37" 00:15:36.368 ], 00:15:36.368 "product_name": "Malloc disk", 00:15:36.368 "block_size": 512, 00:15:36.368 "num_blocks": 65536, 00:15:36.368 "uuid": "114beef7-8619-418d-aeee-3835e4979b37", 00:15:36.368 
"assigned_rate_limits": { 00:15:36.368 "rw_ios_per_sec": 0, 00:15:36.368 "rw_mbytes_per_sec": 0, 00:15:36.368 "r_mbytes_per_sec": 0, 00:15:36.368 "w_mbytes_per_sec": 0 00:15:36.368 }, 00:15:36.368 "claimed": true, 00:15:36.368 "claim_type": "exclusive_write", 00:15:36.368 "zoned": false, 00:15:36.368 "supported_io_types": { 00:15:36.368 "read": true, 00:15:36.368 "write": true, 00:15:36.368 "unmap": true, 00:15:36.368 "write_zeroes": true, 00:15:36.368 "flush": true, 00:15:36.368 "reset": true, 00:15:36.368 "compare": false, 00:15:36.368 "compare_and_write": false, 00:15:36.368 "abort": true, 00:15:36.368 "nvme_admin": false, 00:15:36.368 "nvme_io": false 00:15:36.368 }, 00:15:36.368 "memory_domains": [ 00:15:36.368 { 00:15:36.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.368 "dma_device_type": 2 00:15:36.368 } 00:15:36.368 ], 00:15:36.368 "driver_specific": {} 00:15:36.368 } 00:15:36.368 ] 00:15:36.368 04:52:50 -- common/autotest_common.sh@895 -- # return 0 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.368 04:52:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.627 04:52:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.627 "name": "Existed_Raid", 00:15:36.627 "uuid": "b1dc58ff-490b-4e53-93ab-26af188c05da", 00:15:36.627 "strip_size_kb": 0, 00:15:36.627 "state": "online", 00:15:36.627 "raid_level": "raid1", 00:15:36.627 "superblock": true, 00:15:36.627 "num_base_bdevs": 3, 00:15:36.627 "num_base_bdevs_discovered": 3, 00:15:36.627 "num_base_bdevs_operational": 3, 00:15:36.627 "base_bdevs_list": [ 00:15:36.627 { 00:15:36.627 "name": "BaseBdev1", 00:15:36.627 "uuid": "bc56c807-11ca-4cec-9974-9254796ce5d2", 00:15:36.627 "is_configured": true, 00:15:36.627 "data_offset": 2048, 00:15:36.627 "data_size": 63488 00:15:36.627 }, 00:15:36.627 { 00:15:36.627 "name": "BaseBdev2", 00:15:36.627 "uuid": "dcb57c9c-0a7b-4dcc-9fe6-bd770d5a0cd5", 00:15:36.627 "is_configured": true, 00:15:36.627 "data_offset": 2048, 00:15:36.627 "data_size": 63488 00:15:36.627 }, 00:15:36.627 { 00:15:36.627 "name": "BaseBdev3", 00:15:36.627 "uuid": "114beef7-8619-418d-aeee-3835e4979b37", 00:15:36.627 "is_configured": true, 00:15:36.627 "data_offset": 2048, 00:15:36.627 "data_size": 63488 00:15:36.627 } 00:15:36.627 ] 00:15:36.627 }' 00:15:36.627 04:52:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.627 04:52:50 -- common/autotest_common.sh@10 -- # set +x 00:15:37.194 04:52:51 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:37.194 [2024-05-15 04:52:51.362920] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.453 "name": "Existed_Raid", 00:15:37.453 "uuid": "b1dc58ff-490b-4e53-93ab-26af188c05da", 00:15:37.453 "strip_size_kb": 0, 00:15:37.453 "state": "online", 00:15:37.453 "raid_level": "raid1", 00:15:37.453 "superblock": true, 00:15:37.453 "num_base_bdevs": 3, 00:15:37.453 "num_base_bdevs_discovered": 2, 00:15:37.453 "num_base_bdevs_operational": 2, 00:15:37.453 "base_bdevs_list": [ 00:15:37.453 { 00:15:37.453 "name": null, 00:15:37.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.453 "is_configured": false, 00:15:37.453 "data_offset": 2048, 00:15:37.453 "data_size": 63488 00:15:37.453 }, 00:15:37.453 { 00:15:37.453 "name": "BaseBdev2", 00:15:37.453 "uuid": "dcb57c9c-0a7b-4dcc-9fe6-bd770d5a0cd5", 00:15:37.453 "is_configured": true, 00:15:37.453 "data_offset": 2048, 00:15:37.453 "data_size": 63488 00:15:37.453 }, 00:15:37.453 { 00:15:37.453 "name": "BaseBdev3", 00:15:37.453 "uuid": "114beef7-8619-418d-aeee-3835e4979b37", 00:15:37.453 "is_configured": true, 00:15:37.453 "data_offset": 2048, 00:15:37.453 "data_size": 63488 00:15:37.453 } 00:15:37.453 ] 00:15:37.453 }' 00:15:37.453 04:52:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.453 04:52:51 -- common/autotest_common.sh@10 -- # set +x 00:15:38.028 04:52:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:38.028 04:52:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:38.028 04:52:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:38.028 04:52:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.306 04:52:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:38.306 04:52:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.306 04:52:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:15:38.575 [2024-05-15 04:52:52.600475] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.575 04:52:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:38.575 04:52:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:38.575 04:52:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:38.575 04:52:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.834 04:52:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:38.834 04:52:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.834 04:52:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:38.834 [2024-05-15 04:52:53.061903] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.834 [2024-05-15 04:52:53.061933] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.834 [2024-05-15 04:52:53.061971] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.093 [2024-05-15 04:52:53.159704] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.093 [2024-05-15 04:52:53.159749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:15:39.093 04:52:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:39.093 04:52:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:39.093 04:52:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:39.093 04:52:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.351 04:52:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:39.351 04:52:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:39.351 04:52:53 -- bdev/bdev_raid.sh@287 -- # killprocess 51828 00:15:39.351 04:52:53 -- common/autotest_common.sh@926 -- # '[' -z 51828 ']' 00:15:39.351 04:52:53 -- common/autotest_common.sh@930 -- # kill -0 51828 00:15:39.351 04:52:53 -- common/autotest_common.sh@931 -- # uname 00:15:39.351 04:52:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:39.351 04:52:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 51828 00:15:39.351 killing process with pid 51828 00:15:39.351 04:52:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:39.351 04:52:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:39.351 04:52:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51828' 00:15:39.351 04:52:53 -- common/autotest_common.sh@945 -- # kill 51828 00:15:39.351 04:52:53 -- common/autotest_common.sh@950 -- # wait 51828 00:15:39.351 [2024-05-15 04:52:53.417498] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.352 [2024-05-15 04:52:53.417627] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.725 ************************************ 00:15:40.726 END TEST raid_state_function_test_sb 00:15:40.726 ************************************ 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:40.726 00:15:40.726 real 0m12.367s 00:15:40.726 user 0m20.543s 00:15:40.726 sys 0m1.566s 00:15:40.726 04:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.726 04:52:54 -- common/autotest_common.sh@10 -- # set +x 
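Worth pausing on what the deletion sequence above demonstrated before the next test begins: raid1 has redundancy, so removing BaseBdev1 left Existed_Raid online with two base bdevs discovered and operational, and only deleting the remaining members drove the state from online to offline ahead of killprocess. A hedged sketch of that flow, reusing the rpc/sock shorthand from the earlier sketch (the trace drives this through its (( i < num_base_bdevs )) loop rather than literally as written):

    # raid1 tolerates losing one member: Existed_Raid must stay online.
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -e '.[] | select(.name == "Existed_Raid") | .state == "online"'
    # Removing the rest exhausts redundancy; the raid bdev is deconfigured
    # and last reports state "offline" before the app shuts down.
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev2
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev3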
00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:15:40.726 04:52:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:40.726 04:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.726 04:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:40.726 ************************************ 00:15:40.726 START TEST raid_superblock_test 00:15:40.726 ************************************ 00:15:40.726 04:52:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=52215 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:40.726 04:52:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 52215 /var/tmp/spdk-raid.sock 00:15:40.726 04:52:54 -- common/autotest_common.sh@819 -- # '[' -z 52215 ']' 00:15:40.726 04:52:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:40.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:40.726 04:52:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:40.726 04:52:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:40.726 04:52:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:40.726 04:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 [2024-05-15 04:52:55.083850] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
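The raid_superblock_test now starting differs from the state-function tests above in two ways visible later in this trace: each raid member is a passthru bdev (pt1, pt2, pt3) layered on a malloc bdev with a fixed UUID, and the array is created with -s so a superblock is written to the members. A condensed sketch of the stack it assembles, using the commands and values that appear step by step below (rpc/sock are the same shorthand as before):

    # One malloc bdev per member, each wrapped in a passthru bdev whose UUID
    # matches the bdev_pt_uuid values set in the trace.
    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -s makes the raid module write a superblock; raid_bdev1 comes up online.
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1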
00:15:40.984 [2024-05-15 04:52:55.084086] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52215 ] 00:15:41.243 [2024-05-15 04:52:55.261786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.502 [2024-05-15 04:52:55.544957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.760 [2024-05-15 04:52:55.808681] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.697 04:52:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:42.697 04:52:56 -- common/autotest_common.sh@852 -- # return 0 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:42.697 malloc1 00:15:42.697 04:52:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.956 [2024-05-15 04:52:57.008092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.956 [2024-05-15 04:52:57.008175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.956 [2024-05-15 04:52:57.008230] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:15:42.956 [2024-05-15 04:52:57.008271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.956 [2024-05-15 04:52:57.009970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.956 [2024-05-15 04:52:57.010011] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.956 pt1 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.956 04:52:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:43.216 malloc2 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:15:43.216 [2024-05-15 04:52:57.332301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.216 [2024-05-15 04:52:57.332367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.216 [2024-05-15 04:52:57.332426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:15:43.216 [2024-05-15 04:52:57.332460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.216 [2024-05-15 04:52:57.333953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.216 [2024-05-15 04:52:57.333987] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.216 pt2 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.216 04:52:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:43.475 malloc3 00:15:43.475 04:52:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.475 [2024-05-15 04:52:57.653709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.475 [2024-05-15 04:52:57.653780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.475 [2024-05-15 04:52:57.653839] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:15:43.475 [2024-05-15 04:52:57.653871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.475 [2024-05-15 04:52:57.655317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.475 [2024-05-15 04:52:57.655354] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.475 pt3 00:15:43.475 04:52:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:43.475 04:52:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:43.475 04:52:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:43.734 [2024-05-15 04:52:57.857821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.734 [2024-05-15 04:52:57.859473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.734 [2024-05-15 04:52:57.859514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.734 [2024-05-15 04:52:57.859614] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002c180 00:15:43.734 [2024-05-15 04:52:57.859624] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:43.734 [2024-05-15 04:52:57.859734] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:43.734 [2024-05-15 04:52:57.860005] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002c180 00:15:43.734 [2024-05-15 04:52:57.860015] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002c180 00:15:43.734 [2024-05-15 04:52:57.860126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.734 04:52:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.993 04:52:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.993 "name": "raid_bdev1", 00:15:43.993 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:43.993 "strip_size_kb": 0, 00:15:43.993 "state": "online", 00:15:43.993 "raid_level": "raid1", 00:15:43.993 "superblock": true, 00:15:43.993 "num_base_bdevs": 3, 00:15:43.993 "num_base_bdevs_discovered": 3, 00:15:43.993 "num_base_bdevs_operational": 3, 00:15:43.993 "base_bdevs_list": [ 00:15:43.993 { 00:15:43.993 "name": "pt1", 00:15:43.993 "uuid": "6885a03d-b603-5b7b-bbd1-64433bebde57", 00:15:43.993 "is_configured": true, 00:15:43.993 "data_offset": 2048, 00:15:43.993 "data_size": 63488 00:15:43.993 }, 00:15:43.993 { 00:15:43.993 "name": "pt2", 00:15:43.993 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:43.993 "is_configured": true, 00:15:43.993 "data_offset": 2048, 00:15:43.993 "data_size": 63488 00:15:43.993 }, 00:15:43.993 { 00:15:43.993 "name": "pt3", 00:15:43.993 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:43.993 "is_configured": true, 00:15:43.993 "data_offset": 2048, 00:15:43.993 "data_size": 63488 00:15:43.993 } 00:15:43.993 ] 00:15:43.993 }' 00:15:43.993 04:52:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.993 04:52:58 -- common/autotest_common.sh@10 -- # set +x 00:15:44.560 04:52:58 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.560 04:52:58 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:44.560 [2024-05-15 04:52:58.733938] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.560 04:52:58 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=05ddc2a3-b57d-426a-b8ef-8581ab86b5ac 00:15:44.560 04:52:58 -- bdev/bdev_raid.sh@380 -- # '[' -z 05ddc2a3-b57d-426a-b8ef-8581ab86b5ac ']' 00:15:44.560 04:52:58 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:44.819 [2024-05-15 04:52:58.925882] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.819 [2024-05-15 04:52:58.925907] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.819 [2024-05-15 04:52:58.925965] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.819 [2024-05-15 04:52:58.926012] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.819 [2024-05-15 04:52:58.926021] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c180 name raid_bdev1, state offline 00:15:44.819 04:52:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:44.819 04:52:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.078 04:52:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:45.078 04:52:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:45.078 04:52:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.078 04:52:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:45.078 04:52:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.078 04:52:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:45.337 04:52:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.337 04:52:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:45.596 04:52:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:45.596 04:52:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:45.596 04:52:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:45.596 04:52:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:45.596 04:52:59 -- common/autotest_common.sh@640 -- # local es=0 00:15:45.596 04:52:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:45.596 04:52:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.596 04:52:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.596 04:52:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.596 04:52:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.596 04:52:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.596 04:52:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.596 04:52:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.596 04:52:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:45.596 04:52:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:45.854 [2024-05-15 04:52:59.981983] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:45.854 [2024-05-15 04:52:59.983674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:45.854 [2024-05-15 04:52:59.983745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:45.854 [2024-05-15 04:52:59.983781] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:45.854 [2024-05-15 04:52:59.983852] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:45.854 [2024-05-15 04:52:59.983885] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:45.854 [2024-05-15 04:52:59.983928] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.854 [2024-05-15 04:52:59.983939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c780 name raid_bdev1, state configuring 00:15:45.854 request: 00:15:45.854 { 00:15:45.854 "name": "raid_bdev1", 00:15:45.854 "raid_level": "raid1", 00:15:45.854 "base_bdevs": [ 00:15:45.854 "malloc1", 00:15:45.854 "malloc2", 00:15:45.854 "malloc3" 00:15:45.854 ], 00:15:45.854 "superblock": false, 00:15:45.854 "method": "bdev_raid_create", 00:15:45.854 "req_id": 1 00:15:45.854 } 00:15:45.854 Got JSON-RPC error response 00:15:45.854 response: 00:15:45.854 { 00:15:45.854 "code": -17, 00:15:45.854 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:45.854 } 00:15:45.854 04:52:59 -- common/autotest_common.sh@643 -- # es=1 00:15:45.854 04:52:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:45.854 04:52:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:45.854 04:52:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:45.854 04:52:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.854 04:52:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:46.113 04:53:00 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:46.113 04:53:00 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:46.113 04:53:00 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.372 [2024-05-15 04:53:00.358007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.372 [2024-05-15 04:53:00.358069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.372 [2024-05-15 04:53:00.358118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:15:46.372 [2024-05-15 04:53:00.358142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.372 [2024-05-15 04:53:00.359664] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.372 [2024-05-15 04:53:00.359700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.372 [2024-05-15 04:53:00.359801] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:46.372 [2024-05-15 04:53:00.359861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.372 pt1 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:46.372 
04:53:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.372 "name": "raid_bdev1", 00:15:46.372 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:46.372 "strip_size_kb": 0, 00:15:46.372 "state": "configuring", 00:15:46.372 "raid_level": "raid1", 00:15:46.372 "superblock": true, 00:15:46.372 "num_base_bdevs": 3, 00:15:46.372 "num_base_bdevs_discovered": 1, 00:15:46.372 "num_base_bdevs_operational": 3, 00:15:46.372 "base_bdevs_list": [ 00:15:46.372 { 00:15:46.372 "name": "pt1", 00:15:46.372 "uuid": "6885a03d-b603-5b7b-bbd1-64433bebde57", 00:15:46.372 "is_configured": true, 00:15:46.372 "data_offset": 2048, 00:15:46.372 "data_size": 63488 00:15:46.372 }, 00:15:46.372 { 00:15:46.372 "name": null, 00:15:46.372 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:46.372 "is_configured": false, 00:15:46.372 "data_offset": 2048, 00:15:46.372 "data_size": 63488 00:15:46.372 }, 00:15:46.372 { 00:15:46.372 "name": null, 00:15:46.372 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:46.372 "is_configured": false, 00:15:46.372 "data_offset": 2048, 00:15:46.372 "data_size": 63488 00:15:46.372 } 00:15:46.372 ] 00:15:46.372 }' 00:15:46.372 04:53:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.372 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:46.940 04:53:01 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:46.940 04:53:01 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.198 [2024-05-15 04:53:01.306106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.198 [2024-05-15 04:53:01.306179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.198 [2024-05-15 04:53:01.306233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f480 00:15:47.198 [2024-05-15 04:53:01.306251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.198 [2024-05-15 04:53:01.306618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.198 [2024-05-15 04:53:01.306640] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.198 [2024-05-15 04:53:01.306934] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:47.198 [2024-05-15 04:53:01.306968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.198 pt2 00:15:47.198 04:53:01 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:47.457 [2024-05-15 04:53:01.522131] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.457 "name": "raid_bdev1", 00:15:47.457 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:47.457 "strip_size_kb": 0, 00:15:47.457 "state": "configuring", 00:15:47.457 "raid_level": "raid1", 00:15:47.457 "superblock": true, 00:15:47.457 "num_base_bdevs": 3, 00:15:47.457 "num_base_bdevs_discovered": 1, 00:15:47.457 "num_base_bdevs_operational": 3, 00:15:47.457 "base_bdevs_list": [ 00:15:47.457 { 00:15:47.457 "name": "pt1", 00:15:47.457 "uuid": "6885a03d-b603-5b7b-bbd1-64433bebde57", 00:15:47.457 "is_configured": true, 00:15:47.457 "data_offset": 2048, 00:15:47.457 "data_size": 63488 00:15:47.457 }, 00:15:47.457 { 00:15:47.457 "name": null, 00:15:47.457 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:47.457 "is_configured": false, 00:15:47.457 "data_offset": 2048, 00:15:47.457 "data_size": 63488 00:15:47.457 }, 00:15:47.457 { 00:15:47.457 "name": null, 00:15:47.457 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:47.457 "is_configured": false, 00:15:47.457 "data_offset": 2048, 00:15:47.457 "data_size": 63488 00:15:47.457 } 00:15:47.457 ] 00:15:47.457 }' 00:15:47.457 04:53:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.457 04:53:01 -- common/autotest_common.sh@10 -- # set +x 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.394 [2024-05-15 04:53:02.450220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.394 [2024-05-15 04:53:02.450311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.394 [2024-05-15 04:53:02.450361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030c80 00:15:48.394 [2024-05-15 04:53:02.450400] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.394 [2024-05-15 04:53:02.450999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.394 [2024-05-15 04:53:02.451044] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.394 [2024-05-15 04:53:02.451147] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:48.394 [2024-05-15 04:53:02.451173] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.394 pt2 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.394 [2024-05-15 04:53:02.590240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.394 [2024-05-15 04:53:02.590296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.394 [2024-05-15 04:53:02.590330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032180 00:15:48.394 [2024-05-15 04:53:02.590356] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.394 [2024-05-15 04:53:02.590640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.394 [2024-05-15 04:53:02.590668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.394 [2024-05-15 04:53:02.590816] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:48.394 [2024-05-15 04:53:02.590836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.394 [2024-05-15 04:53:02.590911] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:15:48.394 [2024-05-15 04:53:02.590919] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.394 [2024-05-15 04:53:02.591020] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:48.394 [2024-05-15 04:53:02.591214] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:15:48.394 [2024-05-15 04:53:02.591228] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:15:48.394 [2024-05-15 04:53:02.591320] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.394 pt3 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.394 04:53:02 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.652 04:53:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.652 "name": "raid_bdev1", 00:15:48.652 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:48.652 "strip_size_kb": 0, 00:15:48.652 "state": "online", 00:15:48.652 "raid_level": "raid1", 00:15:48.652 "superblock": true, 00:15:48.652 "num_base_bdevs": 3, 00:15:48.652 "num_base_bdevs_discovered": 3, 00:15:48.652 "num_base_bdevs_operational": 3, 00:15:48.652 "base_bdevs_list": [ 00:15:48.652 { 00:15:48.652 "name": "pt1", 00:15:48.652 "uuid": "6885a03d-b603-5b7b-bbd1-64433bebde57", 00:15:48.652 "is_configured": true, 00:15:48.652 "data_offset": 2048, 00:15:48.652 "data_size": 63488 00:15:48.652 }, 00:15:48.652 { 00:15:48.652 "name": "pt2", 00:15:48.652 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:48.652 "is_configured": true, 00:15:48.652 "data_offset": 2048, 00:15:48.652 "data_size": 63488 00:15:48.652 }, 00:15:48.652 { 00:15:48.652 "name": "pt3", 00:15:48.652 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:48.652 "is_configured": true, 00:15:48.652 "data_offset": 2048, 00:15:48.652 "data_size": 63488 00:15:48.652 } 00:15:48.652 ] 00:15:48.652 }' 00:15:48.652 04:53:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.652 04:53:02 -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 04:53:03 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:49.219 04:53:03 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:49.477 [2024-05-15 04:53:03.470422] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@430 -- # '[' 05ddc2a3-b57d-426a-b8ef-8581ab86b5ac '!=' 05ddc2a3-b57d-426a-b8ef-8581ab86b5ac ']' 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:49.477 [2024-05-15 04:53:03.690382] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.477 04:53:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.734 04:53:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.734 04:53:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.734 04:53:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.734 "name": "raid_bdev1", 00:15:49.734 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:49.734 "strip_size_kb": 0, 
00:15:49.734 "state": "online", 00:15:49.734 "raid_level": "raid1", 00:15:49.734 "superblock": true, 00:15:49.734 "num_base_bdevs": 3, 00:15:49.734 "num_base_bdevs_discovered": 2, 00:15:49.734 "num_base_bdevs_operational": 2, 00:15:49.734 "base_bdevs_list": [ 00:15:49.734 { 00:15:49.734 "name": null, 00:15:49.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.734 "is_configured": false, 00:15:49.734 "data_offset": 2048, 00:15:49.735 "data_size": 63488 00:15:49.735 }, 00:15:49.735 { 00:15:49.735 "name": "pt2", 00:15:49.735 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:49.735 "is_configured": true, 00:15:49.735 "data_offset": 2048, 00:15:49.735 "data_size": 63488 00:15:49.735 }, 00:15:49.735 { 00:15:49.735 "name": "pt3", 00:15:49.735 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:49.735 "is_configured": true, 00:15:49.735 "data_offset": 2048, 00:15:49.735 "data_size": 63488 00:15:49.735 } 00:15:49.735 ] 00:15:49.735 }' 00:15:49.735 04:53:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.735 04:53:03 -- common/autotest_common.sh@10 -- # set +x 00:15:50.301 04:53:04 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:50.560 [2024-05-15 04:53:04.642429] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.560 [2024-05-15 04:53:04.642457] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.560 [2024-05-15 04:53:04.642520] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.560 [2024-05-15 04:53:04.642565] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.560 [2024-05-15 04:53:04.642574] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:15:50.560 04:53:04 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:50.560 04:53:04 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.818 04:53:04 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:50.818 04:53:04 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:50.818 04:53:04 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:50.818 04:53:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:50.818 04:53:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:50.818 04:53:05 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:50.818 04:53:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:50.818 04:53:05 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:51.077 04:53:05 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:51.077 04:53:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:51.077 04:53:05 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:51.077 04:53:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:51.077 04:53:05 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.374 [2024-05-15 04:53:05.346510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.374 [2024-05-15 04:53:05.346579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:51.374 [2024-05-15 04:53:05.346622] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033680 00:15:51.374 [2024-05-15 04:53:05.346641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.374 [2024-05-15 04:53:05.348579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.374 [2024-05-15 04:53:05.348617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.374 [2024-05-15 04:53:05.348714] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:51.374 [2024-05-15 04:53:05.348782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.374 pt2 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.374 "name": "raid_bdev1", 00:15:51.374 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:51.374 "strip_size_kb": 0, 00:15:51.374 "state": "configuring", 00:15:51.374 "raid_level": "raid1", 00:15:51.374 "superblock": true, 00:15:51.374 "num_base_bdevs": 3, 00:15:51.374 "num_base_bdevs_discovered": 1, 00:15:51.374 "num_base_bdevs_operational": 2, 00:15:51.374 "base_bdevs_list": [ 00:15:51.374 { 00:15:51.374 "name": null, 00:15:51.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.374 "is_configured": false, 00:15:51.374 "data_offset": 2048, 00:15:51.374 "data_size": 63488 00:15:51.374 }, 00:15:51.374 { 00:15:51.374 "name": "pt2", 00:15:51.374 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:51.374 "is_configured": true, 00:15:51.374 "data_offset": 2048, 00:15:51.374 "data_size": 63488 00:15:51.374 }, 00:15:51.374 { 00:15:51.374 "name": null, 00:15:51.374 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:51.374 "is_configured": false, 00:15:51.374 "data_offset": 2048, 00:15:51.374 "data_size": 63488 00:15:51.374 } 00:15:51.374 ] 00:15:51.374 }' 00:15:51.374 04:53:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.374 04:53:05 -- common/autotest_common.sh@10 -- # set +x 00:15:51.946 04:53:06 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:15:51.946 04:53:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:51.946 04:53:06 -- bdev/bdev_raid.sh@462 -- # i=2 00:15:51.946 04:53:06 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:52.205 [2024-05-15 04:53:06.270666] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:52.205 [2024-05-15 04:53:06.270789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.205 [2024-05-15 04:53:06.270841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:15:52.205 [2024-05-15 04:53:06.270866] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.205 [2024-05-15 04:53:06.271486] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.205 [2024-05-15 04:53:06.271519] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:52.205 [2024-05-15 04:53:06.271622] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:52.205 [2024-05-15 04:53:06.271644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.205 [2024-05-15 04:53:06.271732] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000034b80 00:15:52.205 [2024-05-15 04:53:06.271741] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:52.205 [2024-05-15 04:53:06.271826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:52.205 [2024-05-15 04:53:06.272032] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000034b80 00:15:52.205 [2024-05-15 04:53:06.272042] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000034b80 00:15:52.205 [2024-05-15 04:53:06.272137] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.205 pt3 00:15:52.205 04:53:06 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.205 04:53:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:52.205 04:53:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.205 04:53:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.205 04:53:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.205 04:53:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:52.206 04:53:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.206 04:53:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.206 04:53:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.206 04:53:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.206 04:53:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.206 04:53:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.464 04:53:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.464 "name": "raid_bdev1", 00:15:52.464 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:52.464 "strip_size_kb": 0, 00:15:52.464 "state": "online", 00:15:52.464 "raid_level": "raid1", 00:15:52.464 "superblock": true, 00:15:52.464 "num_base_bdevs": 3, 00:15:52.464 "num_base_bdevs_discovered": 2, 00:15:52.464 "num_base_bdevs_operational": 2, 00:15:52.464 "base_bdevs_list": [ 00:15:52.464 { 00:15:52.464 "name": null, 00:15:52.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.464 "is_configured": false, 00:15:52.464 "data_offset": 2048, 00:15:52.464 "data_size": 63488 00:15:52.464 }, 00:15:52.464 { 00:15:52.464 "name": "pt2", 00:15:52.464 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 
00:15:52.464 "is_configured": true, 00:15:52.464 "data_offset": 2048, 00:15:52.464 "data_size": 63488 00:15:52.464 }, 00:15:52.464 { 00:15:52.464 "name": "pt3", 00:15:52.464 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:52.464 "is_configured": true, 00:15:52.465 "data_offset": 2048, 00:15:52.465 "data_size": 63488 00:15:52.465 } 00:15:52.465 ] 00:15:52.465 }' 00:15:52.465 04:53:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.465 04:53:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.723 04:53:06 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:15:52.723 04:53:06 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:52.982 [2024-05-15 04:53:07.070696] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.982 [2024-05-15 04:53:07.070733] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.982 [2024-05-15 04:53:07.070793] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.982 [2024-05-15 04:53:07.070833] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.982 [2024-05-15 04:53:07.070842] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000034b80 name raid_bdev1, state offline 00:15:52.982 04:53:07 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.982 04:53:07 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.241 [2024-05-15 04:53:07.430804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.241 [2024-05-15 04:53:07.430882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.241 [2024-05-15 04:53:07.430931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036680 00:15:53.241 [2024-05-15 04:53:07.430953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.241 [2024-05-15 04:53:07.432682] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.241 [2024-05-15 04:53:07.432718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.241 [2024-05-15 04:53:07.432846] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:53.241 [2024-05-15 04:53:07.432905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.241 pt1 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.241 04:53:07 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.241 04:53:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.499 04:53:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.500 "name": "raid_bdev1", 00:15:53.500 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:53.500 "strip_size_kb": 0, 00:15:53.500 "state": "configuring", 00:15:53.500 "raid_level": "raid1", 00:15:53.500 "superblock": true, 00:15:53.500 "num_base_bdevs": 3, 00:15:53.500 "num_base_bdevs_discovered": 1, 00:15:53.500 "num_base_bdevs_operational": 3, 00:15:53.500 "base_bdevs_list": [ 00:15:53.500 { 00:15:53.500 "name": "pt1", 00:15:53.500 "uuid": "6885a03d-b603-5b7b-bbd1-64433bebde57", 00:15:53.500 "is_configured": true, 00:15:53.500 "data_offset": 2048, 00:15:53.500 "data_size": 63488 00:15:53.500 }, 00:15:53.500 { 00:15:53.500 "name": null, 00:15:53.500 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:53.500 "is_configured": false, 00:15:53.500 "data_offset": 2048, 00:15:53.500 "data_size": 63488 00:15:53.500 }, 00:15:53.500 { 00:15:53.500 "name": null, 00:15:53.500 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:53.500 "is_configured": false, 00:15:53.500 "data_offset": 2048, 00:15:53.500 "data_size": 63488 00:15:53.500 } 00:15:53.500 ] 00:15:53.500 }' 00:15:53.500 04:53:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.500 04:53:07 -- common/autotest_common.sh@10 -- # set +x 00:15:54.066 04:53:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:15:54.066 04:53:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:15:54.066 04:53:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:54.066 04:53:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:15:54.066 04:53:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:15:54.066 04:53:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:54.325 04:53:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:15:54.325 04:53:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:15:54.325 04:53:08 -- bdev/bdev_raid.sh@489 -- # i=2 00:15:54.325 04:53:08 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.325 [2024-05-15 04:53:08.550889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.325 [2024-05-15 04:53:08.550965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.325 [2024-05-15 04:53:08.551023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038180 00:15:54.325 [2024-05-15 04:53:08.551058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.325 [2024-05-15 04:53:08.551434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.325 [2024-05-15 04:53:08.551464] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.325 [2024-05-15 04:53:08.551564] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
pt3 00:15:54.325 [2024-05-15 04:53:08.551576] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:54.325 [2024-05-15 04:53:08.551586] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.325 [2024-05-15 04:53:08.551601] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000037b80 name raid_bdev1, state configuring 00:15:54.325 [2024-05-15 04:53:08.551677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.585 pt3 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.585 "name": "raid_bdev1", 00:15:54.585 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:54.585 "strip_size_kb": 0, 00:15:54.585 "state": "configuring", 00:15:54.585 "raid_level": "raid1", 00:15:54.585 "superblock": true, 00:15:54.585 "num_base_bdevs": 3, 00:15:54.585 "num_base_bdevs_discovered": 1, 00:15:54.585 "num_base_bdevs_operational": 2, 00:15:54.585 "base_bdevs_list": [ 00:15:54.585 { 00:15:54.585 "name": null, 00:15:54.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.585 "is_configured": false, 00:15:54.585 "data_offset": 2048, 00:15:54.585 "data_size": 63488 00:15:54.585 }, 00:15:54.585 { 00:15:54.585 "name": null, 00:15:54.585 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:54.585 "is_configured": false, 00:15:54.585 "data_offset": 2048, 00:15:54.585 "data_size": 63488 00:15:54.585 }, 00:15:54.585 { 00:15:54.585 "name": "pt3", 00:15:54.585 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:54.585 "is_configured": true, 00:15:54.585 "data_offset": 2048, 00:15:54.585 "data_size": 63488 00:15:54.585 } 00:15:54.585 ] 00:15:54.585 }' 00:15:54.585 04:53:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.585 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.152 04:53:09 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:15:55.152 04:53:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:15:55.152 04:53:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.411 [2024-05-15 04:53:09.410976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.411 [2024-05-15 04:53:09.411052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.411 [2024-05-15 04:53:09.411091] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000039980 00:15:55.411 [2024-05-15 04:53:09.411116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.411 [2024-05-15 04:53:09.411424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.411 [2024-05-15 04:53:09.411448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.411 [2024-05-15 04:53:09.411525] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:55.411 [2024-05-15 04:53:09.411544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.411 [2024-05-15 04:53:09.411617] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000039380 00:15:55.411 [2024-05-15 04:53:09.411625] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.411 [2024-05-15 04:53:09.411709] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:55.411 [2024-05-15 04:53:09.412115] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000039380 00:15:55.411 [2024-05-15 04:53:09.412128] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000039380 00:15:55.411 [2024-05-15 04:53:09.412231] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.411 pt2 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.411 04:53:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.412 04:53:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.412 04:53:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.412 04:53:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.669 04:53:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.669 "name": "raid_bdev1", 00:15:55.669 "uuid": "05ddc2a3-b57d-426a-b8ef-8581ab86b5ac", 00:15:55.669 "strip_size_kb": 0, 00:15:55.669 "state": "online", 00:15:55.669 "raid_level": "raid1", 00:15:55.669 "superblock": true, 00:15:55.669 "num_base_bdevs": 3, 00:15:55.669 "num_base_bdevs_discovered": 2, 00:15:55.669 "num_base_bdevs_operational": 2, 00:15:55.669 "base_bdevs_list": [ 00:15:55.669 { 00:15:55.669 "name": null, 00:15:55.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.669 "is_configured": false, 00:15:55.669 "data_offset": 2048, 00:15:55.669 "data_size": 63488 00:15:55.669 }, 00:15:55.669 { 00:15:55.669 "name": "pt2", 00:15:55.669 "uuid": "89fe78a2-998e-5581-ac18-79cdf528ea53", 00:15:55.669 "is_configured": true, 00:15:55.669 "data_offset": 2048, 00:15:55.669 "data_size": 
63488 00:15:55.669 }, 00:15:55.669 { 00:15:55.669 "name": "pt3", 00:15:55.669 "uuid": "34a7be8a-85ce-556c-ad4e-827079b8245b", 00:15:55.669 "is_configured": true, 00:15:55.669 "data_offset": 2048, 00:15:55.669 "data_size": 63488 00:15:55.669 } 00:15:55.669 ] 00:15:55.669 }' 00:15:55.669 04:53:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.669 04:53:09 -- common/autotest_common.sh@10 -- # set +x 00:15:56.237 04:53:10 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.237 04:53:10 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:56.237 [2024-05-15 04:53:10.411270] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.237 04:53:10 -- bdev/bdev_raid.sh@506 -- # '[' 05ddc2a3-b57d-426a-b8ef-8581ab86b5ac '!=' 05ddc2a3-b57d-426a-b8ef-8581ab86b5ac ']' 00:15:56.237 04:53:10 -- bdev/bdev_raid.sh@511 -- # killprocess 52215 00:15:56.237 04:53:10 -- common/autotest_common.sh@926 -- # '[' -z 52215 ']' 00:15:56.237 04:53:10 -- common/autotest_common.sh@930 -- # kill -0 52215 00:15:56.237 04:53:10 -- common/autotest_common.sh@931 -- # uname 00:15:56.237 04:53:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:56.237 04:53:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52215 00:15:56.237 killing process with pid 52215 00:15:56.237 04:53:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:56.237 04:53:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:56.237 04:53:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52215' 00:15:56.237 04:53:10 -- common/autotest_common.sh@945 -- # kill 52215 00:15:56.237 04:53:10 -- common/autotest_common.sh@950 -- # wait 52215 00:15:56.237 [2024-05-15 04:53:10.459114] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.237 [2024-05-15 04:53:10.459182] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.237 [2024-05-15 04:53:10.459228] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.237 [2024-05-15 04:53:10.459250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000039380 name raid_bdev1, state offline 00:15:56.805 [2024-05-15 04:53:10.752659] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.181 ************************************ 00:15:58.181 END TEST raid_superblock_test 00:15:58.181 ************************************ 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:58.181 00:15:58.181 real 0m17.264s 00:15:58.181 user 0m30.332s 00:15:58.181 sys 0m2.272s 00:15:58.181 04:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.181 04:53:12 -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:15:58.181 04:53:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:58.181 04:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.181 04:53:12 -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 ************************************ 00:15:58.181 START TEST raid_state_function_test 00:15:58.181 ************************************ 00:15:58.181 04:53:12 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:58.181 Process raid pid: 52807 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=52807 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52807' 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52807 /var/tmp/spdk-raid.sock 00:15:58.181 04:53:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:58.181 04:53:12 -- common/autotest_common.sh@819 -- # '[' -z 52807 ']' 00:15:58.181 04:53:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:58.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:58.181 04:53:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.181 04:53:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:58.181 04:53:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.181 04:53:12 -- common/autotest_common.sh@10 -- # set +x 00:15:58.439 [2024-05-15 04:53:12.420857] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
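raid_state_function_test drives the same RPC surface but against named-but-absent base bdevs: with num_base_bdevs=4 and level raid0 it selects strip_size=64 and leaves superblock_create_arg empty. The create call it issues once the app is up (visible further down) therefore has this shape, sketched here with the names from the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # none of the BaseBdevN exist yet, so Existed_Raid is registered in the
  # 'configuring' state with num_base_bdevs_discovered=0 rather than failing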
00:15:58.439 [2024-05-15 04:53:12.421091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.439 [2024-05-15 04:53:12.616701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.697 [2024-05-15 04:53:12.884244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.955 [2024-05-15 04:53:13.142421] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.890 04:53:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.890 04:53:13 -- common/autotest_common.sh@852 -- # return 0 00:15:59.890 04:53:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:59.890 [2024-05-15 04:53:14.087962] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.890 [2024-05-15 04:53:14.088032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.890 [2024-05-15 04:53:14.088044] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.890 [2024-05-15 04:53:14.088063] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.890 [2024-05-15 04:53:14.088070] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.890 [2024-05-15 04:53:14.088113] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.890 [2024-05-15 04:53:14.088120] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:59.890 [2024-05-15 04:53:14.088143] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.890 04:53:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.149 04:53:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.149 "name": "Existed_Raid", 00:16:00.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.149 "strip_size_kb": 64, 00:16:00.149 "state": "configuring", 00:16:00.149 "raid_level": "raid0", 00:16:00.149 "superblock": false, 00:16:00.149 "num_base_bdevs": 4, 00:16:00.149 "num_base_bdevs_discovered": 0, 00:16:00.149 "num_base_bdevs_operational": 4, 00:16:00.149 "base_bdevs_list": [ 00:16:00.149 { 00:16:00.149 
"name": "BaseBdev1", 00:16:00.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.149 "is_configured": false, 00:16:00.149 "data_offset": 0, 00:16:00.149 "data_size": 0 00:16:00.149 }, 00:16:00.149 { 00:16:00.149 "name": "BaseBdev2", 00:16:00.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.149 "is_configured": false, 00:16:00.149 "data_offset": 0, 00:16:00.149 "data_size": 0 00:16:00.149 }, 00:16:00.149 { 00:16:00.149 "name": "BaseBdev3", 00:16:00.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.149 "is_configured": false, 00:16:00.149 "data_offset": 0, 00:16:00.149 "data_size": 0 00:16:00.149 }, 00:16:00.149 { 00:16:00.149 "name": "BaseBdev4", 00:16:00.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.149 "is_configured": false, 00:16:00.149 "data_offset": 0, 00:16:00.149 "data_size": 0 00:16:00.149 } 00:16:00.149 ] 00:16:00.149 }' 00:16:00.149 04:53:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.149 04:53:14 -- common/autotest_common.sh@10 -- # set +x 00:16:00.716 04:53:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.716 [2024-05-15 04:53:14.928007] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.716 [2024-05-15 04:53:14.928050] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:16:00.716 04:53:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:00.975 [2024-05-15 04:53:15.128074] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.975 [2024-05-15 04:53:15.128135] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.975 [2024-05-15 04:53:15.128144] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.975 [2024-05-15 04:53:15.128177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.975 [2024-05-15 04:53:15.128184] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.975 [2024-05-15 04:53:15.128207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.975 [2024-05-15 04:53:15.128213] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.975 [2024-05-15 04:53:15.128236] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.975 04:53:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.234 BaseBdev1 00:16:01.234 [2024-05-15 04:53:15.313861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.234 04:53:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:01.234 04:53:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:01.234 04:53:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:01.234 04:53:15 -- common/autotest_common.sh@889 -- # local i 00:16:01.234 04:53:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:01.234 04:53:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:01.234 04:53:15 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.493 04:53:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.493 [ 00:16:01.493 { 00:16:01.493 "name": "BaseBdev1", 00:16:01.493 "aliases": [ 00:16:01.493 "443d597e-4113-459e-bf4e-6468c1198886" 00:16:01.493 ], 00:16:01.493 "product_name": "Malloc disk", 00:16:01.493 "block_size": 512, 00:16:01.493 "num_blocks": 65536, 00:16:01.493 "uuid": "443d597e-4113-459e-bf4e-6468c1198886", 00:16:01.493 "assigned_rate_limits": { 00:16:01.493 "rw_ios_per_sec": 0, 00:16:01.493 "rw_mbytes_per_sec": 0, 00:16:01.493 "r_mbytes_per_sec": 0, 00:16:01.493 "w_mbytes_per_sec": 0 00:16:01.493 }, 00:16:01.493 "claimed": true, 00:16:01.493 "claim_type": "exclusive_write", 00:16:01.493 "zoned": false, 00:16:01.493 "supported_io_types": { 00:16:01.493 "read": true, 00:16:01.493 "write": true, 00:16:01.493 "unmap": true, 00:16:01.493 "write_zeroes": true, 00:16:01.493 "flush": true, 00:16:01.493 "reset": true, 00:16:01.493 "compare": false, 00:16:01.493 "compare_and_write": false, 00:16:01.493 "abort": true, 00:16:01.493 "nvme_admin": false, 00:16:01.493 "nvme_io": false 00:16:01.493 }, 00:16:01.493 "memory_domains": [ 00:16:01.493 { 00:16:01.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.493 "dma_device_type": 2 00:16:01.493 } 00:16:01.493 ], 00:16:01.493 "driver_specific": {} 00:16:01.493 } 00:16:01.493 ] 00:16:01.493 04:53:15 -- common/autotest_common.sh@895 -- # return 0 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.493 04:53:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.752 04:53:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.752 "name": "Existed_Raid", 00:16:01.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.752 "strip_size_kb": 64, 00:16:01.752 "state": "configuring", 00:16:01.752 "raid_level": "raid0", 00:16:01.752 "superblock": false, 00:16:01.752 "num_base_bdevs": 4, 00:16:01.752 "num_base_bdevs_discovered": 1, 00:16:01.752 "num_base_bdevs_operational": 4, 00:16:01.752 "base_bdevs_list": [ 00:16:01.752 { 00:16:01.752 "name": "BaseBdev1", 00:16:01.752 "uuid": "443d597e-4113-459e-bf4e-6468c1198886", 00:16:01.752 "is_configured": true, 00:16:01.752 "data_offset": 0, 00:16:01.752 "data_size": 65536 00:16:01.752 }, 00:16:01.752 { 00:16:01.752 "name": "BaseBdev2", 00:16:01.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.752 "is_configured": false, 00:16:01.752 "data_offset": 0, 00:16:01.752 "data_size": 0 00:16:01.752 }, 
00:16:01.752 { 00:16:01.752 "name": "BaseBdev3", 00:16:01.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.752 "is_configured": false, 00:16:01.752 "data_offset": 0, 00:16:01.752 "data_size": 0 00:16:01.752 }, 00:16:01.752 { 00:16:01.752 "name": "BaseBdev4", 00:16:01.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.752 "is_configured": false, 00:16:01.752 "data_offset": 0, 00:16:01.752 "data_size": 0 00:16:01.752 } 00:16:01.752 ] 00:16:01.752 }' 00:16:01.752 04:53:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.752 04:53:15 -- common/autotest_common.sh@10 -- # set +x 00:16:02.320 04:53:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.579 [2024-05-15 04:53:16.557996] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.579 [2024-05-15 04:53:16.558040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:02.579 [2024-05-15 04:53:16.706109] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.579 [2024-05-15 04:53:16.707414] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.579 [2024-05-15 04:53:16.707485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.579 [2024-05-15 04:53:16.707505] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.579 [2024-05-15 04:53:16.707527] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.579 [2024-05-15 04:53:16.707535] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.579 [2024-05-15 04:53:16.707551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.579 04:53:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.838 04:53:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.838 "name": "Existed_Raid", 00:16:02.838 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.838 "strip_size_kb": 64, 00:16:02.838 "state": "configuring", 00:16:02.838 "raid_level": "raid0", 00:16:02.838 "superblock": false, 00:16:02.838 "num_base_bdevs": 4, 00:16:02.838 "num_base_bdevs_discovered": 1, 00:16:02.839 "num_base_bdevs_operational": 4, 00:16:02.839 "base_bdevs_list": [ 00:16:02.839 { 00:16:02.839 "name": "BaseBdev1", 00:16:02.839 "uuid": "443d597e-4113-459e-bf4e-6468c1198886", 00:16:02.839 "is_configured": true, 00:16:02.839 "data_offset": 0, 00:16:02.839 "data_size": 65536 00:16:02.839 }, 00:16:02.839 { 00:16:02.839 "name": "BaseBdev2", 00:16:02.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.839 "is_configured": false, 00:16:02.839 "data_offset": 0, 00:16:02.839 "data_size": 0 00:16:02.839 }, 00:16:02.839 { 00:16:02.839 "name": "BaseBdev3", 00:16:02.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.839 "is_configured": false, 00:16:02.839 "data_offset": 0, 00:16:02.839 "data_size": 0 00:16:02.839 }, 00:16:02.839 { 00:16:02.839 "name": "BaseBdev4", 00:16:02.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.839 "is_configured": false, 00:16:02.839 "data_offset": 0, 00:16:02.839 "data_size": 0 00:16:02.839 } 00:16:02.839 ] 00:16:02.839 }' 00:16:02.839 04:53:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.839 04:53:16 -- common/autotest_common.sh@10 -- # set +x 00:16:03.406 04:53:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.665 [2024-05-15 04:53:17.824186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.665 BaseBdev2 00:16:03.665 04:53:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:03.665 04:53:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:03.665 04:53:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:03.665 04:53:17 -- common/autotest_common.sh@889 -- # local i 00:16:03.665 04:53:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:03.665 04:53:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:03.665 04:53:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.923 04:53:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.184 [ 00:16:04.184 { 00:16:04.184 "name": "BaseBdev2", 00:16:04.184 "aliases": [ 00:16:04.184 "ea3ad8ad-0b71-44b9-8514-133cea9c3331" 00:16:04.184 ], 00:16:04.184 "product_name": "Malloc disk", 00:16:04.184 "block_size": 512, 00:16:04.184 "num_blocks": 65536, 00:16:04.184 "uuid": "ea3ad8ad-0b71-44b9-8514-133cea9c3331", 00:16:04.184 "assigned_rate_limits": { 00:16:04.184 "rw_ios_per_sec": 0, 00:16:04.184 "rw_mbytes_per_sec": 0, 00:16:04.184 "r_mbytes_per_sec": 0, 00:16:04.184 "w_mbytes_per_sec": 0 00:16:04.184 }, 00:16:04.184 "claimed": true, 00:16:04.184 "claim_type": "exclusive_write", 00:16:04.184 "zoned": false, 00:16:04.184 "supported_io_types": { 00:16:04.184 "read": true, 00:16:04.184 "write": true, 00:16:04.184 "unmap": true, 00:16:04.184 "write_zeroes": true, 00:16:04.184 "flush": true, 00:16:04.184 "reset": true, 00:16:04.184 "compare": false, 00:16:04.184 "compare_and_write": false, 00:16:04.184 "abort": true, 00:16:04.184 "nvme_admin": false, 00:16:04.184 "nvme_io": false 00:16:04.184 }, 00:16:04.184 "memory_domains": [ 
00:16:04.184 { 00:16:04.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.184 "dma_device_type": 2 00:16:04.184 } 00:16:04.184 ], 00:16:04.184 "driver_specific": {} 00:16:04.184 } 00:16:04.184 ] 00:16:04.184 04:53:18 -- common/autotest_common.sh@895 -- # return 0 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.184 "name": "Existed_Raid", 00:16:04.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.184 "strip_size_kb": 64, 00:16:04.184 "state": "configuring", 00:16:04.184 "raid_level": "raid0", 00:16:04.184 "superblock": false, 00:16:04.184 "num_base_bdevs": 4, 00:16:04.184 "num_base_bdevs_discovered": 2, 00:16:04.184 "num_base_bdevs_operational": 4, 00:16:04.184 "base_bdevs_list": [ 00:16:04.184 { 00:16:04.184 "name": "BaseBdev1", 00:16:04.184 "uuid": "443d597e-4113-459e-bf4e-6468c1198886", 00:16:04.184 "is_configured": true, 00:16:04.184 "data_offset": 0, 00:16:04.184 "data_size": 65536 00:16:04.184 }, 00:16:04.184 { 00:16:04.184 "name": "BaseBdev2", 00:16:04.184 "uuid": "ea3ad8ad-0b71-44b9-8514-133cea9c3331", 00:16:04.184 "is_configured": true, 00:16:04.184 "data_offset": 0, 00:16:04.184 "data_size": 65536 00:16:04.184 }, 00:16:04.184 { 00:16:04.184 "name": "BaseBdev3", 00:16:04.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.184 "is_configured": false, 00:16:04.184 "data_offset": 0, 00:16:04.184 "data_size": 0 00:16:04.184 }, 00:16:04.184 { 00:16:04.184 "name": "BaseBdev4", 00:16:04.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.184 "is_configured": false, 00:16:04.184 "data_offset": 0, 00:16:04.184 "data_size": 0 00:16:04.184 } 00:16:04.184 ] 00:16:04.184 }' 00:16:04.184 04:53:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.184 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 04:53:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.069 [2024-05-15 04:53:19.148492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.069 BaseBdev3 00:16:05.069 04:53:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:05.069 04:53:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:05.069 04:53:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:05.069 
04:53:19 -- common/autotest_common.sh@889 -- # local i 00:16:05.069 04:53:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:05.069 04:53:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:05.069 04:53:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.327 04:53:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.327 [ 00:16:05.327 { 00:16:05.327 "name": "BaseBdev3", 00:16:05.327 "aliases": [ 00:16:05.327 "b74b60f1-5c6a-49d5-90e7-8061adec265f" 00:16:05.327 ], 00:16:05.327 "product_name": "Malloc disk", 00:16:05.327 "block_size": 512, 00:16:05.327 "num_blocks": 65536, 00:16:05.327 "uuid": "b74b60f1-5c6a-49d5-90e7-8061adec265f", 00:16:05.327 "assigned_rate_limits": { 00:16:05.327 "rw_ios_per_sec": 0, 00:16:05.327 "rw_mbytes_per_sec": 0, 00:16:05.327 "r_mbytes_per_sec": 0, 00:16:05.327 "w_mbytes_per_sec": 0 00:16:05.327 }, 00:16:05.327 "claimed": true, 00:16:05.327 "claim_type": "exclusive_write", 00:16:05.327 "zoned": false, 00:16:05.327 "supported_io_types": { 00:16:05.327 "read": true, 00:16:05.327 "write": true, 00:16:05.327 "unmap": true, 00:16:05.327 "write_zeroes": true, 00:16:05.327 "flush": true, 00:16:05.327 "reset": true, 00:16:05.327 "compare": false, 00:16:05.327 "compare_and_write": false, 00:16:05.327 "abort": true, 00:16:05.327 "nvme_admin": false, 00:16:05.327 "nvme_io": false 00:16:05.327 }, 00:16:05.327 "memory_domains": [ 00:16:05.327 { 00:16:05.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.327 "dma_device_type": 2 00:16:05.327 } 00:16:05.327 ], 00:16:05.327 "driver_specific": {} 00:16:05.327 } 00:16:05.327 ] 00:16:05.327 04:53:19 -- common/autotest_common.sh@895 -- # return 0 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.327 04:53:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.586 04:53:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.586 "name": "Existed_Raid", 00:16:05.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.586 "strip_size_kb": 64, 00:16:05.586 "state": "configuring", 00:16:05.586 "raid_level": "raid0", 00:16:05.586 "superblock": false, 00:16:05.586 "num_base_bdevs": 4, 00:16:05.586 "num_base_bdevs_discovered": 3, 00:16:05.586 "num_base_bdevs_operational": 4, 00:16:05.586 "base_bdevs_list": [ 00:16:05.586 { 00:16:05.586 "name": 
"BaseBdev1", 00:16:05.586 "uuid": "443d597e-4113-459e-bf4e-6468c1198886", 00:16:05.586 "is_configured": true, 00:16:05.586 "data_offset": 0, 00:16:05.586 "data_size": 65536 00:16:05.586 }, 00:16:05.586 { 00:16:05.586 "name": "BaseBdev2", 00:16:05.586 "uuid": "ea3ad8ad-0b71-44b9-8514-133cea9c3331", 00:16:05.586 "is_configured": true, 00:16:05.586 "data_offset": 0, 00:16:05.586 "data_size": 65536 00:16:05.586 }, 00:16:05.586 { 00:16:05.586 "name": "BaseBdev3", 00:16:05.586 "uuid": "b74b60f1-5c6a-49d5-90e7-8061adec265f", 00:16:05.586 "is_configured": true, 00:16:05.586 "data_offset": 0, 00:16:05.586 "data_size": 65536 00:16:05.586 }, 00:16:05.586 { 00:16:05.586 "name": "BaseBdev4", 00:16:05.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.586 "is_configured": false, 00:16:05.586 "data_offset": 0, 00:16:05.586 "data_size": 0 00:16:05.586 } 00:16:05.586 ] 00:16:05.586 }' 00:16:05.586 04:53:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.586 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:06.153 04:53:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.411 [2024-05-15 04:53:20.566672] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.411 BaseBdev4 00:16:06.411 [2024-05-15 04:53:20.566931] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:16:06.411 [2024-05-15 04:53:20.566967] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:06.411 [2024-05-15 04:53:20.567096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:06.411 [2024-05-15 04:53:20.567329] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:16:06.411 [2024-05-15 04:53:20.567339] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:16:06.411 [2024-05-15 04:53:20.567527] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.411 04:53:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:06.411 04:53:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:06.411 04:53:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.411 04:53:20 -- common/autotest_common.sh@889 -- # local i 00:16:06.411 04:53:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.411 04:53:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.411 04:53:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.670 04:53:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.930 [ 00:16:06.930 { 00:16:06.930 "name": "BaseBdev4", 00:16:06.930 "aliases": [ 00:16:06.930 "a1bb9587-79c4-458a-9b12-952976069d55" 00:16:06.930 ], 00:16:06.930 "product_name": "Malloc disk", 00:16:06.930 "block_size": 512, 00:16:06.930 "num_blocks": 65536, 00:16:06.930 "uuid": "a1bb9587-79c4-458a-9b12-952976069d55", 00:16:06.930 "assigned_rate_limits": { 00:16:06.930 "rw_ios_per_sec": 0, 00:16:06.930 "rw_mbytes_per_sec": 0, 00:16:06.930 "r_mbytes_per_sec": 0, 00:16:06.930 "w_mbytes_per_sec": 0 00:16:06.930 }, 00:16:06.930 "claimed": true, 00:16:06.930 "claim_type": "exclusive_write", 00:16:06.930 "zoned": false, 00:16:06.930 
"supported_io_types": { 00:16:06.930 "read": true, 00:16:06.930 "write": true, 00:16:06.930 "unmap": true, 00:16:06.930 "write_zeroes": true, 00:16:06.930 "flush": true, 00:16:06.930 "reset": true, 00:16:06.930 "compare": false, 00:16:06.930 "compare_and_write": false, 00:16:06.930 "abort": true, 00:16:06.930 "nvme_admin": false, 00:16:06.930 "nvme_io": false 00:16:06.930 }, 00:16:06.930 "memory_domains": [ 00:16:06.930 { 00:16:06.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.930 "dma_device_type": 2 00:16:06.930 } 00:16:06.930 ], 00:16:06.930 "driver_specific": {} 00:16:06.930 } 00:16:06.930 ] 00:16:06.930 04:53:20 -- common/autotest_common.sh@895 -- # return 0 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.930 04:53:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.930 04:53:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.930 "name": "Existed_Raid", 00:16:06.930 "uuid": "48b8f310-b00e-4634-aca5-569f1f723cad", 00:16:06.930 "strip_size_kb": 64, 00:16:06.930 "state": "online", 00:16:06.930 "raid_level": "raid0", 00:16:06.930 "superblock": false, 00:16:06.930 "num_base_bdevs": 4, 00:16:06.930 "num_base_bdevs_discovered": 4, 00:16:06.930 "num_base_bdevs_operational": 4, 00:16:06.930 "base_bdevs_list": [ 00:16:06.930 { 00:16:06.930 "name": "BaseBdev1", 00:16:06.930 "uuid": "443d597e-4113-459e-bf4e-6468c1198886", 00:16:06.930 "is_configured": true, 00:16:06.930 "data_offset": 0, 00:16:06.930 "data_size": 65536 00:16:06.930 }, 00:16:06.930 { 00:16:06.930 "name": "BaseBdev2", 00:16:06.930 "uuid": "ea3ad8ad-0b71-44b9-8514-133cea9c3331", 00:16:06.930 "is_configured": true, 00:16:06.930 "data_offset": 0, 00:16:06.930 "data_size": 65536 00:16:06.930 }, 00:16:06.930 { 00:16:06.930 "name": "BaseBdev3", 00:16:06.930 "uuid": "b74b60f1-5c6a-49d5-90e7-8061adec265f", 00:16:06.930 "is_configured": true, 00:16:06.930 "data_offset": 0, 00:16:06.930 "data_size": 65536 00:16:06.930 }, 00:16:06.930 { 00:16:06.930 "name": "BaseBdev4", 00:16:06.930 "uuid": "a1bb9587-79c4-458a-9b12-952976069d55", 00:16:06.930 "is_configured": true, 00:16:06.930 "data_offset": 0, 00:16:06.930 "data_size": 65536 00:16:06.930 } 00:16:06.930 ] 00:16:06.930 }' 00:16:06.930 04:53:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.930 04:53:21 -- common/autotest_common.sh@10 -- # set +x 00:16:07.497 04:53:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:07.757 
[2024-05-15 04:53:21.846892] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.757 [2024-05-15 04:53:21.846919] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.757 [2024-05-15 04:53:21.846962] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.757 04:53:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.016 04:53:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.016 "name": "Existed_Raid", 00:16:08.016 "uuid": "48b8f310-b00e-4634-aca5-569f1f723cad", 00:16:08.016 "strip_size_kb": 64, 00:16:08.016 "state": "offline", 00:16:08.016 "raid_level": "raid0", 00:16:08.016 "superblock": false, 00:16:08.016 "num_base_bdevs": 4, 00:16:08.016 "num_base_bdevs_discovered": 3, 00:16:08.016 "num_base_bdevs_operational": 3, 00:16:08.016 "base_bdevs_list": [ 00:16:08.016 { 00:16:08.016 "name": null, 00:16:08.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.016 "is_configured": false, 00:16:08.016 "data_offset": 0, 00:16:08.016 "data_size": 65536 00:16:08.016 }, 00:16:08.016 { 00:16:08.016 "name": "BaseBdev2", 00:16:08.016 "uuid": "ea3ad8ad-0b71-44b9-8514-133cea9c3331", 00:16:08.016 "is_configured": true, 00:16:08.016 "data_offset": 0, 00:16:08.016 "data_size": 65536 00:16:08.016 }, 00:16:08.016 { 00:16:08.016 "name": "BaseBdev3", 00:16:08.016 "uuid": "b74b60f1-5c6a-49d5-90e7-8061adec265f", 00:16:08.016 "is_configured": true, 00:16:08.016 "data_offset": 0, 00:16:08.016 "data_size": 65536 00:16:08.016 }, 00:16:08.016 { 00:16:08.016 "name": "BaseBdev4", 00:16:08.016 "uuid": "a1bb9587-79c4-458a-9b12-952976069d55", 00:16:08.016 "is_configured": true, 00:16:08.016 "data_offset": 0, 00:16:08.016 "data_size": 65536 00:16:08.016 } 00:16:08.016 ] 00:16:08.016 }' 00:16:08.016 04:53:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.016 04:53:22 -- common/autotest_common.sh@10 -- # set +x 00:16:08.581 04:53:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:08.581 04:53:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:08.581 04:53:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.581 
04:53:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:08.839 04:53:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:08.839 04:53:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.839 04:53:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:08.839 [2024-05-15 04:53:23.000151] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.097 04:53:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:09.354 [2024-05-15 04:53:23.392254] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.354 04:53:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:09.354 04:53:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:09.354 04:53:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.354 04:53:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:09.612 04:53:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:09.612 04:53:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.612 04:53:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:09.870 [2024-05-15 04:53:23.907955] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:09.870 [2024-05-15 04:53:23.908002] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:16:09.870 04:53:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:09.870 04:53:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:09.870 04:53:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.870 04:53:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.127 04:53:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:10.127 04:53:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:10.127 04:53:24 -- bdev/bdev_raid.sh@287 -- # killprocess 52807 00:16:10.127 04:53:24 -- common/autotest_common.sh@926 -- # '[' -z 52807 ']' 00:16:10.127 04:53:24 -- common/autotest_common.sh@930 -- # kill -0 52807 00:16:10.127 04:53:24 -- common/autotest_common.sh@931 -- # uname 00:16:10.127 04:53:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:10.127 04:53:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52807 00:16:10.127 killing process with pid 52807 00:16:10.127 04:53:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:10.127 04:53:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:10.127 04:53:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52807' 00:16:10.127 04:53:24 -- common/autotest_common.sh@945 -- # kill 52807 
00:16:10.127 04:53:24 -- common/autotest_common.sh@950 -- # wait 52807 00:16:10.127 [2024-05-15 04:53:24.252348] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.127 [2024-05-15 04:53:24.252472] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.501 ************************************ 00:16:11.501 END TEST raid_state_function_test 00:16:11.501 ************************************ 00:16:11.501 04:53:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:11.501 00:16:11.501 real 0m13.437s 00:16:11.501 user 0m22.603s 00:16:11.501 sys 0m1.804s 00:16:11.501 04:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.501 04:53:25 -- common/autotest_common.sh@10 -- # set +x 00:16:11.501 04:53:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:16:11.501 04:53:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:11.501 04:53:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.501 04:53:25 -- common/autotest_common.sh@10 -- # set +x 00:16:11.759 ************************************ 00:16:11.759 START TEST raid_state_function_test_sb 00:16:11.759 ************************************ 00:16:11.759 04:53:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:11.759 Process raid pid: 53244 00:16:11.759 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=53244 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53244' 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53244 /var/tmp/spdk-raid.sock 00:16:11.759 04:53:25 -- common/autotest_common.sh@819 -- # '[' -z 53244 ']' 00:16:11.759 04:53:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:11.759 04:53:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.759 04:53:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:11.759 04:53:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.759 04:53:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:11.759 04:53:25 -- common/autotest_common.sh@10 -- # set +x 00:16:11.759 [2024-05-15 04:53:25.883805] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:11.759 [2024-05-15 04:53:25.883948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.016 [2024-05-15 04:53:26.052978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.274 [2024-05-15 04:53:26.286995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.532 [2024-05-15 04:53:26.551394] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.465 04:53:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:13.465 04:53:27 -- common/autotest_common.sh@852 -- # return 0 00:16:13.465 04:53:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:13.465 [2024-05-15 04:53:27.479510] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.465 [2024-05-15 04:53:27.479571] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.465 [2024-05-15 04:53:27.479582] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.465 [2024-05-15 04:53:27.479599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.465 [2024-05-15 04:53:27.479606] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.466 [2024-05-15 04:53:27.479654] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.466 [2024-05-15 04:53:27.479661] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.466 [2024-05-15 04:53:27.479683] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:13.466 04:53:27 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.466 04:53:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.724 04:53:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.724 "name": "Existed_Raid", 00:16:13.724 "uuid": "b6ec9166-4b33-4137-a923-e40147a28188", 00:16:13.724 "strip_size_kb": 64, 00:16:13.724 "state": "configuring", 00:16:13.724 "raid_level": "raid0", 00:16:13.724 "superblock": true, 00:16:13.724 "num_base_bdevs": 4, 00:16:13.724 "num_base_bdevs_discovered": 0, 00:16:13.724 "num_base_bdevs_operational": 4, 00:16:13.724 "base_bdevs_list": [ 00:16:13.724 { 00:16:13.724 "name": "BaseBdev1", 00:16:13.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.724 "is_configured": false, 00:16:13.724 "data_offset": 0, 00:16:13.724 "data_size": 0 00:16:13.724 }, 00:16:13.724 { 00:16:13.724 "name": "BaseBdev2", 00:16:13.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.724 "is_configured": false, 00:16:13.724 "data_offset": 0, 00:16:13.724 "data_size": 0 00:16:13.724 }, 00:16:13.724 { 00:16:13.724 "name": "BaseBdev3", 00:16:13.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.724 "is_configured": false, 00:16:13.724 "data_offset": 0, 00:16:13.724 "data_size": 0 00:16:13.724 }, 00:16:13.724 { 00:16:13.724 "name": "BaseBdev4", 00:16:13.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.724 "is_configured": false, 00:16:13.724 "data_offset": 0, 00:16:13.724 "data_size": 0 00:16:13.724 } 00:16:13.724 ] 00:16:13.724 }' 00:16:13.724 04:53:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.724 04:53:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.290 04:53:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:14.290 [2024-05-15 04:53:28.427471] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.290 [2024-05-15 04:53:28.427508] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:16:14.290 04:53:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:14.549 [2024-05-15 04:53:28.567614] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.549 [2024-05-15 04:53:28.567681] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.549 [2024-05-15 04:53:28.567697] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.549 [2024-05-15 04:53:28.567965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.549 [2024-05-15 04:53:28.567990] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.549 [2024-05-15 04:53:28.568031] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.549 [2024-05-15 04:53:28.568047] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:14.549 [2024-05-15 04:53:28.568089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:14.549 04:53:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.549 BaseBdev1 00:16:14.549 [2024-05-15 04:53:28.755765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.549 04:53:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:14.549 04:53:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:14.549 04:53:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:14.549 04:53:28 -- common/autotest_common.sh@889 -- # local i 00:16:14.549 04:53:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:14.549 04:53:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:14.549 04:53:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.808 04:53:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.067 [ 00:16:15.067 { 00:16:15.067 "name": "BaseBdev1", 00:16:15.067 "aliases": [ 00:16:15.067 "03f04614-1f4a-4ed3-863a-0b5090624355" 00:16:15.067 ], 00:16:15.067 "product_name": "Malloc disk", 00:16:15.067 "block_size": 512, 00:16:15.067 "num_blocks": 65536, 00:16:15.067 "uuid": "03f04614-1f4a-4ed3-863a-0b5090624355", 00:16:15.067 "assigned_rate_limits": { 00:16:15.067 "rw_ios_per_sec": 0, 00:16:15.067 "rw_mbytes_per_sec": 0, 00:16:15.067 "r_mbytes_per_sec": 0, 00:16:15.067 "w_mbytes_per_sec": 0 00:16:15.067 }, 00:16:15.067 "claimed": true, 00:16:15.067 "claim_type": "exclusive_write", 00:16:15.067 "zoned": false, 00:16:15.067 "supported_io_types": { 00:16:15.067 "read": true, 00:16:15.067 "write": true, 00:16:15.067 "unmap": true, 00:16:15.067 "write_zeroes": true, 00:16:15.067 "flush": true, 00:16:15.067 "reset": true, 00:16:15.067 "compare": false, 00:16:15.067 "compare_and_write": false, 00:16:15.067 "abort": true, 00:16:15.067 "nvme_admin": false, 00:16:15.067 "nvme_io": false 00:16:15.067 }, 00:16:15.067 "memory_domains": [ 00:16:15.067 { 00:16:15.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.067 "dma_device_type": 2 00:16:15.067 } 00:16:15.067 ], 00:16:15.067 "driver_specific": {} 00:16:15.067 } 00:16:15.067 ] 00:16:15.067 04:53:29 -- common/autotest_common.sh@895 -- # return 0 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.067 "name": "Existed_Raid", 00:16:15.067 "uuid": "b65a7b0a-9a44-4ece-88d8-3bdd9be0b448", 00:16:15.067 "strip_size_kb": 64, 00:16:15.067 "state": "configuring", 00:16:15.067 "raid_level": "raid0", 00:16:15.067 "superblock": true, 00:16:15.067 "num_base_bdevs": 4, 00:16:15.067 "num_base_bdevs_discovered": 1, 00:16:15.067 "num_base_bdevs_operational": 4, 00:16:15.067 "base_bdevs_list": [ 00:16:15.067 { 00:16:15.067 "name": "BaseBdev1", 00:16:15.067 "uuid": "03f04614-1f4a-4ed3-863a-0b5090624355", 00:16:15.067 "is_configured": true, 00:16:15.067 "data_offset": 2048, 00:16:15.067 "data_size": 63488 00:16:15.067 }, 00:16:15.067 { 00:16:15.067 "name": "BaseBdev2", 00:16:15.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.067 "is_configured": false, 00:16:15.067 "data_offset": 0, 00:16:15.067 "data_size": 0 00:16:15.067 }, 00:16:15.067 { 00:16:15.067 "name": "BaseBdev3", 00:16:15.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.067 "is_configured": false, 00:16:15.067 "data_offset": 0, 00:16:15.067 "data_size": 0 00:16:15.067 }, 00:16:15.067 { 00:16:15.067 "name": "BaseBdev4", 00:16:15.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.067 "is_configured": false, 00:16:15.067 "data_offset": 0, 00:16:15.067 "data_size": 0 00:16:15.067 } 00:16:15.067 ] 00:16:15.067 }' 00:16:15.067 04:53:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.067 04:53:29 -- common/autotest_common.sh@10 -- # set +x 00:16:15.633 04:53:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:15.891 [2024-05-15 04:53:30.055904] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.891 [2024-05-15 04:53:30.055957] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:16:15.891 04:53:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:15.891 04:53:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:16.149 04:53:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.407 BaseBdev1 00:16:16.407 04:53:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:16.407 04:53:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:16.407 04:53:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:16.407 04:53:30 -- common/autotest_common.sh@889 -- # local i 00:16:16.407 04:53:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:16.407 04:53:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:16.407 04:53:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.407 04:53:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.666 [ 00:16:16.666 { 00:16:16.666 "name": "BaseBdev1", 00:16:16.666 "aliases": [ 00:16:16.666 "6d5379dd-571c-4862-b16b-498759ca9849" 00:16:16.666 ], 00:16:16.666 
"product_name": "Malloc disk", 00:16:16.666 "block_size": 512, 00:16:16.666 "num_blocks": 65536, 00:16:16.666 "uuid": "6d5379dd-571c-4862-b16b-498759ca9849", 00:16:16.666 "assigned_rate_limits": { 00:16:16.666 "rw_ios_per_sec": 0, 00:16:16.666 "rw_mbytes_per_sec": 0, 00:16:16.666 "r_mbytes_per_sec": 0, 00:16:16.666 "w_mbytes_per_sec": 0 00:16:16.666 }, 00:16:16.666 "claimed": false, 00:16:16.666 "zoned": false, 00:16:16.666 "supported_io_types": { 00:16:16.666 "read": true, 00:16:16.666 "write": true, 00:16:16.666 "unmap": true, 00:16:16.666 "write_zeroes": true, 00:16:16.666 "flush": true, 00:16:16.666 "reset": true, 00:16:16.666 "compare": false, 00:16:16.666 "compare_and_write": false, 00:16:16.666 "abort": true, 00:16:16.666 "nvme_admin": false, 00:16:16.666 "nvme_io": false 00:16:16.666 }, 00:16:16.666 "memory_domains": [ 00:16:16.666 { 00:16:16.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.666 "dma_device_type": 2 00:16:16.666 } 00:16:16.666 ], 00:16:16.666 "driver_specific": {} 00:16:16.666 } 00:16:16.666 ] 00:16:16.666 04:53:30 -- common/autotest_common.sh@895 -- # return 0 00:16:16.666 04:53:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:16.666 [2024-05-15 04:53:30.894819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.924 [2024-05-15 04:53:30.896489] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.924 [2024-05-15 04:53:30.896563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.924 [2024-05-15 04:53:30.896574] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.924 [2024-05-15 04:53:30.896598] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.924 [2024-05-15 04:53:30.896606] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.924 [2024-05-15 04:53:30.896623] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.924 04:53:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.924 04:53:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.924 "name": "Existed_Raid", 00:16:16.924 
"uuid": "d2c9dee7-d86a-4e90-8cc7-e5a4aca1132d", 00:16:16.924 "strip_size_kb": 64, 00:16:16.924 "state": "configuring", 00:16:16.924 "raid_level": "raid0", 00:16:16.924 "superblock": true, 00:16:16.924 "num_base_bdevs": 4, 00:16:16.924 "num_base_bdevs_discovered": 1, 00:16:16.924 "num_base_bdevs_operational": 4, 00:16:16.924 "base_bdevs_list": [ 00:16:16.924 { 00:16:16.924 "name": "BaseBdev1", 00:16:16.924 "uuid": "6d5379dd-571c-4862-b16b-498759ca9849", 00:16:16.924 "is_configured": true, 00:16:16.924 "data_offset": 2048, 00:16:16.924 "data_size": 63488 00:16:16.924 }, 00:16:16.924 { 00:16:16.924 "name": "BaseBdev2", 00:16:16.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.924 "is_configured": false, 00:16:16.924 "data_offset": 0, 00:16:16.924 "data_size": 0 00:16:16.924 }, 00:16:16.924 { 00:16:16.924 "name": "BaseBdev3", 00:16:16.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.924 "is_configured": false, 00:16:16.924 "data_offset": 0, 00:16:16.924 "data_size": 0 00:16:16.924 }, 00:16:16.924 { 00:16:16.924 "name": "BaseBdev4", 00:16:16.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.924 "is_configured": false, 00:16:16.924 "data_offset": 0, 00:16:16.924 "data_size": 0 00:16:16.924 } 00:16:16.924 ] 00:16:16.924 }' 00:16:16.924 04:53:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.924 04:53:31 -- common/autotest_common.sh@10 -- # set +x 00:16:17.490 04:53:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.748 [2024-05-15 04:53:31.896441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.748 BaseBdev2 00:16:17.748 04:53:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:17.748 04:53:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:17.748 04:53:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:17.748 04:53:31 -- common/autotest_common.sh@889 -- # local i 00:16:17.748 04:53:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:17.748 04:53:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:17.748 04:53:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.020 04:53:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:18.325 [ 00:16:18.325 { 00:16:18.325 "name": "BaseBdev2", 00:16:18.325 "aliases": [ 00:16:18.325 "b6841a62-ff78-4dc7-abaa-8a69bcb4b499" 00:16:18.325 ], 00:16:18.325 "product_name": "Malloc disk", 00:16:18.325 "block_size": 512, 00:16:18.325 "num_blocks": 65536, 00:16:18.325 "uuid": "b6841a62-ff78-4dc7-abaa-8a69bcb4b499", 00:16:18.325 "assigned_rate_limits": { 00:16:18.325 "rw_ios_per_sec": 0, 00:16:18.325 "rw_mbytes_per_sec": 0, 00:16:18.325 "r_mbytes_per_sec": 0, 00:16:18.325 "w_mbytes_per_sec": 0 00:16:18.325 }, 00:16:18.325 "claimed": true, 00:16:18.325 "claim_type": "exclusive_write", 00:16:18.325 "zoned": false, 00:16:18.325 "supported_io_types": { 00:16:18.325 "read": true, 00:16:18.325 "write": true, 00:16:18.325 "unmap": true, 00:16:18.325 "write_zeroes": true, 00:16:18.326 "flush": true, 00:16:18.326 "reset": true, 00:16:18.326 "compare": false, 00:16:18.326 "compare_and_write": false, 00:16:18.326 "abort": true, 00:16:18.326 "nvme_admin": false, 00:16:18.326 "nvme_io": false 00:16:18.326 }, 00:16:18.326 "memory_domains": [ 
00:16:18.326 { 00:16:18.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.326 "dma_device_type": 2 00:16:18.326 } 00:16:18.326 ], 00:16:18.326 "driver_specific": {} 00:16:18.326 } 00:16:18.326 ] 00:16:18.326 04:53:32 -- common/autotest_common.sh@895 -- # return 0 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.326 "name": "Existed_Raid", 00:16:18.326 "uuid": "d2c9dee7-d86a-4e90-8cc7-e5a4aca1132d", 00:16:18.326 "strip_size_kb": 64, 00:16:18.326 "state": "configuring", 00:16:18.326 "raid_level": "raid0", 00:16:18.326 "superblock": true, 00:16:18.326 "num_base_bdevs": 4, 00:16:18.326 "num_base_bdevs_discovered": 2, 00:16:18.326 "num_base_bdevs_operational": 4, 00:16:18.326 "base_bdevs_list": [ 00:16:18.326 { 00:16:18.326 "name": "BaseBdev1", 00:16:18.326 "uuid": "6d5379dd-571c-4862-b16b-498759ca9849", 00:16:18.326 "is_configured": true, 00:16:18.326 "data_offset": 2048, 00:16:18.326 "data_size": 63488 00:16:18.326 }, 00:16:18.326 { 00:16:18.326 "name": "BaseBdev2", 00:16:18.326 "uuid": "b6841a62-ff78-4dc7-abaa-8a69bcb4b499", 00:16:18.326 "is_configured": true, 00:16:18.326 "data_offset": 2048, 00:16:18.326 "data_size": 63488 00:16:18.326 }, 00:16:18.326 { 00:16:18.326 "name": "BaseBdev3", 00:16:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.326 "is_configured": false, 00:16:18.326 "data_offset": 0, 00:16:18.326 "data_size": 0 00:16:18.326 }, 00:16:18.326 { 00:16:18.326 "name": "BaseBdev4", 00:16:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.326 "is_configured": false, 00:16:18.326 "data_offset": 0, 00:16:18.326 "data_size": 0 00:16:18.326 } 00:16:18.326 ] 00:16:18.326 }' 00:16:18.326 04:53:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.326 04:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:18.899 04:53:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.156 [2024-05-15 04:53:33.366102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.156 BaseBdev3 00:16:19.156 04:53:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:19.156 04:53:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:19.156 04:53:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:19.156 
04:53:33 -- common/autotest_common.sh@889 -- # local i 00:16:19.156 04:53:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:19.156 04:53:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:19.156 04:53:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.413 04:53:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:19.671 [ 00:16:19.671 { 00:16:19.671 "name": "BaseBdev3", 00:16:19.671 "aliases": [ 00:16:19.671 "e048f10f-827f-4707-a621-5461658bc9b2" 00:16:19.671 ], 00:16:19.671 "product_name": "Malloc disk", 00:16:19.671 "block_size": 512, 00:16:19.671 "num_blocks": 65536, 00:16:19.671 "uuid": "e048f10f-827f-4707-a621-5461658bc9b2", 00:16:19.671 "assigned_rate_limits": { 00:16:19.671 "rw_ios_per_sec": 0, 00:16:19.671 "rw_mbytes_per_sec": 0, 00:16:19.671 "r_mbytes_per_sec": 0, 00:16:19.671 "w_mbytes_per_sec": 0 00:16:19.671 }, 00:16:19.671 "claimed": true, 00:16:19.671 "claim_type": "exclusive_write", 00:16:19.671 "zoned": false, 00:16:19.671 "supported_io_types": { 00:16:19.671 "read": true, 00:16:19.671 "write": true, 00:16:19.671 "unmap": true, 00:16:19.671 "write_zeroes": true, 00:16:19.671 "flush": true, 00:16:19.671 "reset": true, 00:16:19.671 "compare": false, 00:16:19.671 "compare_and_write": false, 00:16:19.671 "abort": true, 00:16:19.671 "nvme_admin": false, 00:16:19.671 "nvme_io": false 00:16:19.671 }, 00:16:19.671 "memory_domains": [ 00:16:19.671 { 00:16:19.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.671 "dma_device_type": 2 00:16:19.671 } 00:16:19.671 ], 00:16:19.671 "driver_specific": {} 00:16:19.671 } 00:16:19.671 ] 00:16:19.671 04:53:33 -- common/autotest_common.sh@895 -- # return 0 00:16:19.671 04:53:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:19.671 04:53:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:19.671 04:53:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.672 "name": "Existed_Raid", 00:16:19.672 "uuid": "d2c9dee7-d86a-4e90-8cc7-e5a4aca1132d", 00:16:19.672 "strip_size_kb": 64, 00:16:19.672 "state": "configuring", 00:16:19.672 "raid_level": "raid0", 00:16:19.672 "superblock": true, 00:16:19.672 "num_base_bdevs": 4, 00:16:19.672 "num_base_bdevs_discovered": 3, 00:16:19.672 "num_base_bdevs_operational": 4, 00:16:19.672 "base_bdevs_list": [ 00:16:19.672 { 00:16:19.672 "name": 
"BaseBdev1", 00:16:19.672 "uuid": "6d5379dd-571c-4862-b16b-498759ca9849", 00:16:19.672 "is_configured": true, 00:16:19.672 "data_offset": 2048, 00:16:19.672 "data_size": 63488 00:16:19.672 }, 00:16:19.672 { 00:16:19.672 "name": "BaseBdev2", 00:16:19.672 "uuid": "b6841a62-ff78-4dc7-abaa-8a69bcb4b499", 00:16:19.672 "is_configured": true, 00:16:19.672 "data_offset": 2048, 00:16:19.672 "data_size": 63488 00:16:19.672 }, 00:16:19.672 { 00:16:19.672 "name": "BaseBdev3", 00:16:19.672 "uuid": "e048f10f-827f-4707-a621-5461658bc9b2", 00:16:19.672 "is_configured": true, 00:16:19.672 "data_offset": 2048, 00:16:19.672 "data_size": 63488 00:16:19.672 }, 00:16:19.672 { 00:16:19.672 "name": "BaseBdev4", 00:16:19.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.672 "is_configured": false, 00:16:19.672 "data_offset": 0, 00:16:19.672 "data_size": 0 00:16:19.672 } 00:16:19.672 ] 00:16:19.672 }' 00:16:19.672 04:53:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.672 04:53:33 -- common/autotest_common.sh@10 -- # set +x 00:16:20.238 04:53:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.496 [2024-05-15 04:53:34.703549] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.496 BaseBdev4 00:16:20.496 [2024-05-15 04:53:34.703704] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029180 00:16:20.496 [2024-05-15 04:53:34.704000] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:20.496 [2024-05-15 04:53:34.704137] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:20.496 [2024-05-15 04:53:34.704379] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029180 00:16:20.496 [2024-05-15 04:53:34.704389] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000029180 00:16:20.496 [2024-05-15 04:53:34.704489] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.496 04:53:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:20.496 04:53:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:20.496 04:53:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:20.496 04:53:34 -- common/autotest_common.sh@889 -- # local i 00:16:20.496 04:53:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:20.496 04:53:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:20.496 04:53:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.753 04:53:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:21.011 [ 00:16:21.011 { 00:16:21.011 "name": "BaseBdev4", 00:16:21.011 "aliases": [ 00:16:21.011 "ef2c2f02-e7ee-48a7-a08f-0371eb6223f6" 00:16:21.011 ], 00:16:21.011 "product_name": "Malloc disk", 00:16:21.011 "block_size": 512, 00:16:21.011 "num_blocks": 65536, 00:16:21.011 "uuid": "ef2c2f02-e7ee-48a7-a08f-0371eb6223f6", 00:16:21.011 "assigned_rate_limits": { 00:16:21.011 "rw_ios_per_sec": 0, 00:16:21.011 "rw_mbytes_per_sec": 0, 00:16:21.011 "r_mbytes_per_sec": 0, 00:16:21.011 "w_mbytes_per_sec": 0 00:16:21.011 }, 00:16:21.011 "claimed": true, 00:16:21.011 "claim_type": "exclusive_write", 00:16:21.011 "zoned": false, 00:16:21.011 
"supported_io_types": { 00:16:21.011 "read": true, 00:16:21.011 "write": true, 00:16:21.011 "unmap": true, 00:16:21.011 "write_zeroes": true, 00:16:21.011 "flush": true, 00:16:21.011 "reset": true, 00:16:21.011 "compare": false, 00:16:21.011 "compare_and_write": false, 00:16:21.011 "abort": true, 00:16:21.011 "nvme_admin": false, 00:16:21.011 "nvme_io": false 00:16:21.011 }, 00:16:21.011 "memory_domains": [ 00:16:21.011 { 00:16:21.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.011 "dma_device_type": 2 00:16:21.011 } 00:16:21.011 ], 00:16:21.011 "driver_specific": {} 00:16:21.011 } 00:16:21.011 ] 00:16:21.011 04:53:34 -- common/autotest_common.sh@895 -- # return 0 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.011 04:53:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.011 04:53:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.011 04:53:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.011 04:53:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.011 "name": "Existed_Raid", 00:16:21.011 "uuid": "d2c9dee7-d86a-4e90-8cc7-e5a4aca1132d", 00:16:21.011 "strip_size_kb": 64, 00:16:21.011 "state": "online", 00:16:21.011 "raid_level": "raid0", 00:16:21.011 "superblock": true, 00:16:21.011 "num_base_bdevs": 4, 00:16:21.011 "num_base_bdevs_discovered": 4, 00:16:21.011 "num_base_bdevs_operational": 4, 00:16:21.011 "base_bdevs_list": [ 00:16:21.011 { 00:16:21.011 "name": "BaseBdev1", 00:16:21.011 "uuid": "6d5379dd-571c-4862-b16b-498759ca9849", 00:16:21.011 "is_configured": true, 00:16:21.011 "data_offset": 2048, 00:16:21.011 "data_size": 63488 00:16:21.011 }, 00:16:21.011 { 00:16:21.011 "name": "BaseBdev2", 00:16:21.011 "uuid": "b6841a62-ff78-4dc7-abaa-8a69bcb4b499", 00:16:21.011 "is_configured": true, 00:16:21.011 "data_offset": 2048, 00:16:21.011 "data_size": 63488 00:16:21.011 }, 00:16:21.011 { 00:16:21.011 "name": "BaseBdev3", 00:16:21.011 "uuid": "e048f10f-827f-4707-a621-5461658bc9b2", 00:16:21.011 "is_configured": true, 00:16:21.011 "data_offset": 2048, 00:16:21.011 "data_size": 63488 00:16:21.011 }, 00:16:21.011 { 00:16:21.011 "name": "BaseBdev4", 00:16:21.011 "uuid": "ef2c2f02-e7ee-48a7-a08f-0371eb6223f6", 00:16:21.011 "is_configured": true, 00:16:21.011 "data_offset": 2048, 00:16:21.011 "data_size": 63488 00:16:21.011 } 00:16:21.011 ] 00:16:21.011 }' 00:16:21.011 04:53:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.011 04:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:21.945 04:53:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:16:21.945 [2024-05-15 04:53:35.979728] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.945 [2024-05-15 04:53:35.979769] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.945 [2024-05-15 04:53:35.979809] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.945 04:53:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.203 04:53:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.203 "name": "Existed_Raid", 00:16:22.203 "uuid": "d2c9dee7-d86a-4e90-8cc7-e5a4aca1132d", 00:16:22.203 "strip_size_kb": 64, 00:16:22.203 "state": "offline", 00:16:22.203 "raid_level": "raid0", 00:16:22.203 "superblock": true, 00:16:22.203 "num_base_bdevs": 4, 00:16:22.203 "num_base_bdevs_discovered": 3, 00:16:22.203 "num_base_bdevs_operational": 3, 00:16:22.203 "base_bdevs_list": [ 00:16:22.203 { 00:16:22.203 "name": null, 00:16:22.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.203 "is_configured": false, 00:16:22.203 "data_offset": 2048, 00:16:22.203 "data_size": 63488 00:16:22.203 }, 00:16:22.203 { 00:16:22.203 "name": "BaseBdev2", 00:16:22.203 "uuid": "b6841a62-ff78-4dc7-abaa-8a69bcb4b499", 00:16:22.203 "is_configured": true, 00:16:22.203 "data_offset": 2048, 00:16:22.203 "data_size": 63488 00:16:22.203 }, 00:16:22.203 { 00:16:22.203 "name": "BaseBdev3", 00:16:22.203 "uuid": "e048f10f-827f-4707-a621-5461658bc9b2", 00:16:22.203 "is_configured": true, 00:16:22.203 "data_offset": 2048, 00:16:22.203 "data_size": 63488 00:16:22.203 }, 00:16:22.203 { 00:16:22.203 "name": "BaseBdev4", 00:16:22.203 "uuid": "ef2c2f02-e7ee-48a7-a08f-0371eb6223f6", 00:16:22.203 "is_configured": true, 00:16:22.203 "data_offset": 2048, 00:16:22.203 "data_size": 63488 00:16:22.203 } 00:16:22.203 ] 00:16:22.203 }' 00:16:22.203 04:53:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.203 04:53:36 -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 04:53:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:22.769 04:53:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:22.769 04:53:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:22.769 04:53:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:23.027 04:53:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:23.027 04:53:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.027 04:53:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:23.285 [2024-05-15 04:53:37.347975] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.285 04:53:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:23.285 04:53:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:23.285 04:53:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.285 04:53:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:23.543 04:53:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:23.543 04:53:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.543 04:53:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:23.802 [2024-05-15 04:53:37.894881] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:23.802 04:53:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:23.802 04:53:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:23.802 04:53:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:23.802 04:53:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.060 04:53:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.060 04:53:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.060 04:53:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:24.319 [2024-05-15 04:53:38.363858] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:24.319 [2024-05-15 04:53:38.363900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029180 name Existed_Raid, state offline 00:16:24.319 04:53:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:24.319 04:53:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.319 04:53:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.319 04:53:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:24.577 04:53:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:24.577 04:53:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:24.577 04:53:38 -- bdev/bdev_raid.sh@287 -- # killprocess 53244 00:16:24.577 04:53:38 -- common/autotest_common.sh@926 -- # '[' -z 53244 ']' 00:16:24.577 04:53:38 -- common/autotest_common.sh@930 -- # kill -0 53244 00:16:24.577 04:53:38 -- common/autotest_common.sh@931 -- # uname 00:16:24.578 04:53:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:24.578 04:53:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53244 00:16:24.578 killing process with pid 53244 00:16:24.578 04:53:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:24.578 04:53:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:24.578 04:53:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53244' 00:16:24.578 04:53:38 -- 
common/autotest_common.sh@945 -- # kill 53244 00:16:24.578 04:53:38 -- common/autotest_common.sh@950 -- # wait 53244 00:16:24.578 [2024-05-15 04:53:38.722852] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.578 [2024-05-15 04:53:38.722958] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.954 04:53:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:25.954 00:16:25.954 real 0m14.415s 00:16:25.954 user 0m24.446s 00:16:25.954 sys 0m1.837s 00:16:25.954 04:53:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.954 04:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:25.954 ************************************ 00:16:25.954 END TEST raid_state_function_test_sb 00:16:25.954 ************************************ 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:26.213 04:53:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:26.213 04:53:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:26.213 04:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:26.213 ************************************ 00:16:26.213 START TEST raid_superblock_test 00:16:26.213 ************************************ 00:16:26.213 04:53:40 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:26.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@357 -- # raid_pid=53697 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53697 /var/tmp/spdk-raid.sock 00:16:26.213 04:53:40 -- common/autotest_common.sh@819 -- # '[' -z 53697 ']' 00:16:26.213 04:53:40 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:26.213 04:53:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.213 04:53:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:26.213 04:53:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
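[Editor's note, not part of the captured log] raid_superblock_test, which starts here, layers each malloc bdev behind a passthru bdev created with a fixed UUID, so the raid superblock written to the members records stable base-bdev identities. A hedged sketch of one setup iteration, built only from the rpc.py calls visible later in this trace:
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# ...repeated for malloc2..malloc4 / pt2..pt4, then assembled with a superblock (-s):
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s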
00:16:26.213 04:53:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:26.213 04:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:26.213 [2024-05-15 04:53:40.370591] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:26.213 [2024-05-15 04:53:40.371199] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53697 ] 00:16:26.471 [2024-05-15 04:53:40.558247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.729 [2024-05-15 04:53:40.835616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.988 [2024-05-15 04:53:41.109016] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.923 04:53:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.923 04:53:41 -- common/autotest_common.sh@852 -- # return 0 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.923 04:53:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:27.923 malloc1 00:16:27.923 04:53:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.182 [2024-05-15 04:53:42.336710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.182 [2024-05-15 04:53:42.336804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.182 [2024-05-15 04:53:42.336881] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:16:28.182 [2024-05-15 04:53:42.336922] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.182 [2024-05-15 04:53:42.338757] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.182 [2024-05-15 04:53:42.338797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.182 pt1 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.182 04:53:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:28.441 malloc2 00:16:28.441 04:53:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.441 [2024-05-15 04:53:42.668924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.441 [2024-05-15 04:53:42.668993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.441 [2024-05-15 04:53:42.669031] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:16:28.441 [2024-05-15 04:53:42.669064] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.441 [2024-05-15 04:53:42.670606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.441 [2024-05-15 04:53:42.670642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.441 pt2 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.699 04:53:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.700 04:53:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:28.700 malloc3 00:16:28.700 04:53:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.957 [2024-05-15 04:53:42.999234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.958 [2024-05-15 04:53:42.999301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.958 [2024-05-15 04:53:42.999361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:16:28.958 [2024-05-15 04:53:42.999394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.958 [2024-05-15 04:53:43.000837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.958 [2024-05-15 04:53:43.000878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.958 pt3 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.958 04:53:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:29.214 malloc4 00:16:29.214 04:53:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:29.470 [2024-05-15 04:53:43.468328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:29.470 [2024-05-15 04:53:43.468405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.470 [2024-05-15 04:53:43.468460] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:16:29.470 [2024-05-15 04:53:43.468511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.470 pt4 00:16:29.470 [2024-05-15 04:53:43.472440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.470 [2024-05-15 04:53:43.472598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:29.470 04:53:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:29.470 04:53:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:29.470 04:53:43 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:29.470 [2024-05-15 04:53:43.688935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.470 [2024-05-15 04:53:43.690194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.470 [2024-05-15 04:53:43.690241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.470 [2024-05-15 04:53:43.690291] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:29.470 [2024-05-15 04:53:43.690407] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:16:29.470 [2024-05-15 04:53:43.690417] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:29.470 [2024-05-15 04:53:43.690520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:29.470 [2024-05-15 04:53:43.690714] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:16:29.470 [2024-05-15 04:53:43.690725] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:16:29.470 [2024-05-15 04:53:43.690833] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.728 "name": "raid_bdev1", 00:16:29.728 "uuid": "81f50def-7954-4f44-9106-16761ffc0ebe", 00:16:29.728 "strip_size_kb": 64, 00:16:29.728 "state": "online", 00:16:29.728 "raid_level": "raid0", 00:16:29.728 "superblock": true, 00:16:29.728 "num_base_bdevs": 4, 00:16:29.728 "num_base_bdevs_discovered": 4, 00:16:29.728 "num_base_bdevs_operational": 4, 00:16:29.728 "base_bdevs_list": [ 00:16:29.728 { 00:16:29.728 "name": "pt1", 00:16:29.728 "uuid": "8e68e8bc-d71a-5285-beea-3adb9fb81c2b", 00:16:29.728 "is_configured": true, 00:16:29.728 "data_offset": 2048, 00:16:29.728 "data_size": 63488 00:16:29.728 }, 00:16:29.728 { 00:16:29.728 "name": "pt2", 00:16:29.728 "uuid": "472100ad-b4bf-50e0-9511-042ff5de19b4", 00:16:29.728 "is_configured": true, 00:16:29.728 "data_offset": 2048, 00:16:29.728 "data_size": 63488 00:16:29.728 }, 00:16:29.728 { 00:16:29.728 "name": "pt3", 00:16:29.728 "uuid": "a0dc13d5-4749-5d72-9e55-45ae9a1b36a3", 00:16:29.728 "is_configured": true, 00:16:29.728 "data_offset": 2048, 00:16:29.728 "data_size": 63488 00:16:29.728 }, 00:16:29.728 { 00:16:29.728 "name": "pt4", 00:16:29.728 "uuid": "4288c73c-b260-5961-8815-46112e01f26d", 00:16:29.728 "is_configured": true, 00:16:29.728 "data_offset": 2048, 00:16:29.728 "data_size": 63488 00:16:29.728 } 00:16:29.728 ] 00:16:29.728 }' 00:16:29.728 04:53:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.728 04:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:30.292 04:53:44 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:30.292 04:53:44 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:30.550 [2024-05-15 04:53:44.553076] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.550 04:53:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=81f50def-7954-4f44-9106-16761ffc0ebe 00:16:30.550 04:53:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 81f50def-7954-4f44-9106-16761ffc0ebe ']' 00:16:30.550 04:53:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.550 [2024-05-15 04:53:44.777005] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.550 [2024-05-15 04:53:44.777034] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.550 [2024-05-15 04:53:44.777109] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.550 [2024-05-15 04:53:44.777157] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.550 [2024-05-15 04:53:44.777166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:16:30.809 04:53:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:30.809 04:53:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.809 04:53:45 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:30.809 04:53:45 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:30.809 04:53:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.809 04:53:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
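[Editor's note, not part of the captured log] Once pt1..pt4 are deleted here, the underlying malloc bdevs still carry the raid superblock, so the negative check that follows expects bdev_raid_create to refuse them with JSON-RPC error -17 ("File exists"), exactly the response dumped below. A hedged sketch of that check, reusing the jq filter and rpc call from this trace (the echo messages are assumptions):
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'   # expect: false
if $RPC bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
  echo "unexpected: create succeeded despite existing superblock"
else
  echo "failed as expected (File exists)"
fi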
00:16:31.075 04:53:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.075 04:53:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:31.382 04:53:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.382 04:53:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:31.382 04:53:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.382 04:53:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:31.657 04:53:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:31.657 04:53:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.914 04:53:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:31.914 04:53:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:31.914 04:53:45 -- common/autotest_common.sh@640 -- # local es=0 00:16:31.914 04:53:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:31.914 04:53:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.914 04:53:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.914 04:53:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.914 04:53:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.914 04:53:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.914 04:53:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:31.914 04:53:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.914 04:53:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:31.914 04:53:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:31.914 [2024-05-15 04:53:46.049077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.914 [2024-05-15 04:53:46.050384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.914 [2024-05-15 04:53:46.050423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:31.914 [2024-05-15 04:53:46.050446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:31.914 [2024-05-15 04:53:46.050483] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:31.914 [2024-05-15 04:53:46.050551] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:31.914 [2024-05-15 04:53:46.050584] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:31.914 [2024-05-15 
04:53:46.050631] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:16:31.914 [2024-05-15 04:53:46.050654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.914 [2024-05-15 04:53:46.050666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002e580 name raid_bdev1, state configuring 00:16:31.914 request: 00:16:31.914 { 00:16:31.914 "name": "raid_bdev1", 00:16:31.914 "raid_level": "raid0", 00:16:31.914 "base_bdevs": [ 00:16:31.914 "malloc1", 00:16:31.914 "malloc2", 00:16:31.914 "malloc3", 00:16:31.914 "malloc4" 00:16:31.914 ], 00:16:31.914 "superblock": false, 00:16:31.914 "strip_size_kb": 64, 00:16:31.914 "method": "bdev_raid_create", 00:16:31.914 "req_id": 1 00:16:31.914 } 00:16:31.914 Got JSON-RPC error response 00:16:31.914 response: 00:16:31.914 { 00:16:31.914 "code": -17, 00:16:31.914 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.914 } 00:16:31.914 04:53:46 -- common/autotest_common.sh@643 -- # es=1 00:16:31.914 04:53:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:31.914 04:53:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:31.914 04:53:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:31.914 04:53:46 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.914 04:53:46 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:32.173 04:53:46 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:32.173 04:53:46 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:32.173 04:53:46 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.431 [2024-05-15 04:53:46.413095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.431 [2024-05-15 04:53:46.413160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.431 [2024-05-15 04:53:46.413237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:16:32.431 [2024-05-15 04:53:46.413263] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.431 [2024-05-15 04:53:46.414776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.431 [2024-05-15 04:53:46.414833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.431 [2024-05-15 04:53:46.414929] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:32.431 [2024-05-15 04:53:46.414987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.431 pt1 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.431 "name": "raid_bdev1", 00:16:32.431 "uuid": "81f50def-7954-4f44-9106-16761ffc0ebe", 00:16:32.431 "strip_size_kb": 64, 00:16:32.431 "state": "configuring", 00:16:32.431 "raid_level": "raid0", 00:16:32.431 "superblock": true, 00:16:32.431 "num_base_bdevs": 4, 00:16:32.431 "num_base_bdevs_discovered": 1, 00:16:32.431 "num_base_bdevs_operational": 4, 00:16:32.431 "base_bdevs_list": [ 00:16:32.431 { 00:16:32.431 "name": "pt1", 00:16:32.431 "uuid": "8e68e8bc-d71a-5285-beea-3adb9fb81c2b", 00:16:32.431 "is_configured": true, 00:16:32.431 "data_offset": 2048, 00:16:32.431 "data_size": 63488 00:16:32.431 }, 00:16:32.431 { 00:16:32.431 "name": null, 00:16:32.431 "uuid": "472100ad-b4bf-50e0-9511-042ff5de19b4", 00:16:32.431 "is_configured": false, 00:16:32.431 "data_offset": 2048, 00:16:32.431 "data_size": 63488 00:16:32.431 }, 00:16:32.431 { 00:16:32.431 "name": null, 00:16:32.431 "uuid": "a0dc13d5-4749-5d72-9e55-45ae9a1b36a3", 00:16:32.431 "is_configured": false, 00:16:32.431 "data_offset": 2048, 00:16:32.431 "data_size": 63488 00:16:32.431 }, 00:16:32.431 { 00:16:32.431 "name": null, 00:16:32.431 "uuid": "4288c73c-b260-5961-8815-46112e01f26d", 00:16:32.431 "is_configured": false, 00:16:32.431 "data_offset": 2048, 00:16:32.431 "data_size": 63488 00:16:32.431 } 00:16:32.431 ] 00:16:32.431 }' 00:16:32.431 04:53:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.431 04:53:46 -- common/autotest_common.sh@10 -- # set +x 00:16:33.364 04:53:47 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:16:33.364 04:53:47 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.364 [2024-05-15 04:53:47.513240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.364 [2024-05-15 04:53:47.513315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.364 [2024-05-15 04:53:47.513404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031880 00:16:33.364 [2024-05-15 04:53:47.513432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.364 [2024-05-15 04:53:47.514007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.364 [2024-05-15 04:53:47.514062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.364 [2024-05-15 04:53:47.514164] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:33.364 [2024-05-15 04:53:47.514190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.364 pt2 00:16:33.364 04:53:47 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:33.622 [2024-05-15 04:53:47.741258] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
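[Editor's note, not part of the captured log] The verify_raid_bdev_state helper traced around this point asserts on fields of the raid JSON it fetches; a hedged sketch of the core of that check, based on the bdev_raid_get_bdevs/jq pipeline shown in this log (the [ ... ] comparisons are an assumption about how the helper tests the fields):
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[ "$(jq -r .state <<< "$tmp")" = configuring ]              # expected_state at this point
[ "$(jq -r .num_base_bdevs_discovered <<< "$tmp")" -eq 1 ]  # only pt1 has been re-created so far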
00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.622 04:53:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.881 04:53:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.881 "name": "raid_bdev1", 00:16:33.881 "uuid": "81f50def-7954-4f44-9106-16761ffc0ebe", 00:16:33.881 "strip_size_kb": 64, 00:16:33.881 "state": "configuring", 00:16:33.881 "raid_level": "raid0", 00:16:33.881 "superblock": true, 00:16:33.881 "num_base_bdevs": 4, 00:16:33.881 "num_base_bdevs_discovered": 1, 00:16:33.881 "num_base_bdevs_operational": 4, 00:16:33.881 "base_bdevs_list": [ 00:16:33.881 { 00:16:33.881 "name": "pt1", 00:16:33.881 "uuid": "8e68e8bc-d71a-5285-beea-3adb9fb81c2b", 00:16:33.881 "is_configured": true, 00:16:33.881 "data_offset": 2048, 00:16:33.881 "data_size": 63488 00:16:33.881 }, 00:16:33.881 { 00:16:33.881 "name": null, 00:16:33.881 "uuid": "472100ad-b4bf-50e0-9511-042ff5de19b4", 00:16:33.881 "is_configured": false, 00:16:33.881 "data_offset": 2048, 00:16:33.881 "data_size": 63488 00:16:33.881 }, 00:16:33.881 { 00:16:33.881 "name": null, 00:16:33.881 "uuid": "a0dc13d5-4749-5d72-9e55-45ae9a1b36a3", 00:16:33.881 "is_configured": false, 00:16:33.881 "data_offset": 2048, 00:16:33.881 "data_size": 63488 00:16:33.881 }, 00:16:33.881 { 00:16:33.881 "name": null, 00:16:33.881 "uuid": "4288c73c-b260-5961-8815-46112e01f26d", 00:16:33.881 "is_configured": false, 00:16:33.881 "data_offset": 2048, 00:16:33.881 "data_size": 63488 00:16:33.881 } 00:16:33.881 ] 00:16:33.881 }' 00:16:33.881 04:53:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.881 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:16:34.448 04:53:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:34.448 04:53:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.448 04:53:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.707 [2024-05-15 04:53:48.765458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.707 [2024-05-15 04:53:48.765548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.707 [2024-05-15 04:53:48.765610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032d80 00:16:34.707 [2024-05-15 04:53:48.765634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.707 [2024-05-15 04:53:48.766227] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.707 [2024-05-15 04:53:48.766284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.707 [2024-05-15 04:53:48.766389] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:16:34.707 [2024-05-15 04:53:48.766414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.707 pt2 00:16:34.707 04:53:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.707 04:53:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.707 04:53:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:34.707 [2024-05-15 04:53:48.909416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:34.707 [2024-05-15 04:53:48.909474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.707 [2024-05-15 04:53:48.909525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280 00:16:34.707 [2024-05-15 04:53:48.909554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.707 [2024-05-15 04:53:48.910040] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.707 [2024-05-15 04:53:48.910097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:34.707 [2024-05-15 04:53:48.910177] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:34.707 [2024-05-15 04:53:48.910199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.707 pt3 00:16:34.707 04:53:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.707 04:53:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.707 04:53:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:34.966 [2024-05-15 04:53:49.045435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:34.966 [2024-05-15 04:53:49.045509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.966 [2024-05-15 04:53:49.045547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780 00:16:34.966 [2024-05-15 04:53:49.045576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.966 [2024-05-15 04:53:49.046051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.966 [2024-05-15 04:53:49.046105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:34.966 [2024-05-15 04:53:49.046193] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:34.966 [2024-05-15 04:53:49.046212] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:34.966 [2024-05-15 04:53:49.046290] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000031280 00:16:34.966 [2024-05-15 04:53:49.046299] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:34.966 [2024-05-15 04:53:49.046377] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:34.966 [2024-05-15 04:53:49.046600] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000031280 00:16:34.966 [2024-05-15 04:53:49.046611] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000031280 00:16:34.966 [2024-05-15 04:53:49.046711] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:34.966 pt4 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.966 04:53:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.225 04:53:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.225 "name": "raid_bdev1", 00:16:35.225 "uuid": "81f50def-7954-4f44-9106-16761ffc0ebe", 00:16:35.225 "strip_size_kb": 64, 00:16:35.225 "state": "online", 00:16:35.225 "raid_level": "raid0", 00:16:35.225 "superblock": true, 00:16:35.225 "num_base_bdevs": 4, 00:16:35.225 "num_base_bdevs_discovered": 4, 00:16:35.225 "num_base_bdevs_operational": 4, 00:16:35.225 "base_bdevs_list": [ 00:16:35.225 { 00:16:35.225 "name": "pt1", 00:16:35.225 "uuid": "8e68e8bc-d71a-5285-beea-3adb9fb81c2b", 00:16:35.225 "is_configured": true, 00:16:35.225 "data_offset": 2048, 00:16:35.225 "data_size": 63488 00:16:35.225 }, 00:16:35.225 { 00:16:35.225 "name": "pt2", 00:16:35.225 "uuid": "472100ad-b4bf-50e0-9511-042ff5de19b4", 00:16:35.225 "is_configured": true, 00:16:35.225 "data_offset": 2048, 00:16:35.225 "data_size": 63488 00:16:35.225 }, 00:16:35.225 { 00:16:35.225 "name": "pt3", 00:16:35.225 "uuid": "a0dc13d5-4749-5d72-9e55-45ae9a1b36a3", 00:16:35.225 "is_configured": true, 00:16:35.225 "data_offset": 2048, 00:16:35.225 "data_size": 63488 00:16:35.225 }, 00:16:35.225 { 00:16:35.225 "name": "pt4", 00:16:35.225 "uuid": "4288c73c-b260-5961-8815-46112e01f26d", 00:16:35.225 "is_configured": true, 00:16:35.225 "data_offset": 2048, 00:16:35.225 "data_size": 63488 00:16:35.225 } 00:16:35.225 ] 00:16:35.225 }' 00:16:35.225 04:53:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.225 04:53:49 -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 04:53:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:35.792 04:53:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:35.792 [2024-05-15 04:53:49.985707] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.792 04:53:50 -- bdev/bdev_raid.sh@430 -- # '[' 81f50def-7954-4f44-9106-16761ffc0ebe '!=' 81f50def-7954-4f44-9106-16761ffc0ebe ']' 00:16:35.792 04:53:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:35.792 04:53:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:35.792 04:53:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:35.792 04:53:50 -- bdev/bdev_raid.sh@511 -- # killprocess 53697 00:16:35.792 04:53:50 -- common/autotest_common.sh@926 -- # '[' -z 
53697 ']' 00:16:35.792 04:53:50 -- common/autotest_common.sh@930 -- # kill -0 53697 00:16:35.792 04:53:50 -- common/autotest_common.sh@931 -- # uname 00:16:35.792 04:53:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:35.792 04:53:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53697 00:16:36.050 killing process with pid 53697 00:16:36.050 04:53:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:36.050 04:53:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:36.050 04:53:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53697' 00:16:36.050 04:53:50 -- common/autotest_common.sh@945 -- # kill 53697 00:16:36.050 04:53:50 -- common/autotest_common.sh@950 -- # wait 53697 00:16:36.050 [2024-05-15 04:53:50.031342] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.050 [2024-05-15 04:53:50.031415] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.050 [2024-05-15 04:53:50.031466] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.050 [2024-05-15 04:53:50.031476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000031280 name raid_bdev1, state offline 00:16:36.309 [2024-05-15 04:53:50.430093] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.684 04:53:51 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:37.684 00:16:37.684 real 0m11.653s 00:16:37.684 user 0m19.120s 00:16:37.684 sys 0m1.450s 00:16:37.684 04:53:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.684 04:53:51 -- common/autotest_common.sh@10 -- # set +x 00:16:37.684 ************************************ 00:16:37.684 END TEST raid_superblock_test 00:16:37.684 ************************************ 00:16:37.943 04:53:51 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:37.944 04:53:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:37.944 04:53:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.944 04:53:51 -- common/autotest_common.sh@10 -- # set +x 00:16:37.944 ************************************ 00:16:37.944 START TEST raid_state_function_test 00:16:37.944 ************************************ 00:16:37.944 04:53:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.944 04:53:51 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:37.944 Process raid pid: 54025 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=54025 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 54025' 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 54025 /var/tmp/spdk-raid.sock 00:16:37.944 04:53:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:37.944 04:53:51 -- common/autotest_common.sh@819 -- # '[' -z 54025 ']' 00:16:37.944 04:53:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:37.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.944 04:53:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.944 04:53:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:37.944 04:53:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.944 04:53:51 -- common/autotest_common.sh@10 -- # set +x 00:16:37.944 [2024-05-15 04:53:52.083294] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
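The trace above shows the raid_state_function_test harness booting a dedicated bdev_svc app that listens on its own RPC socket (/var/tmp/spdk-raid.sock) before any base bdevs exist. A minimal standalone sketch of the same flow, using only the binaries, flags, and names that appear in this trace (running them by hand outside the harness is an assumption, not something the log itself does):

    # start a bare bdev_svc app on a private RPC socket, with bdev_raid debug logging
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # request a concat raid (64 KiB strip) over four bdevs that do not exist yet;
    # the raid is registered but stays in the "configuring" state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # inspect the raid state the same way the test's verify_raid_bdev_state helper does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'

Each base bdev is then created with bdev_malloc_create 32 512 -b BaseBdevN, and the test re-checks after every step that num_base_bdevs_discovered grows from 0 to 4 while the state flips from configuring to online, exactly as the JSON dumps in the lines that follow record.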
00:16:37.944 [2024-05-15 04:53:52.083516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.203 [2024-05-15 04:53:52.266812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.461 [2024-05-15 04:53:52.563481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.719 [2024-05-15 04:53:52.837022] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.653 04:53:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.653 04:53:53 -- common/autotest_common.sh@852 -- # return 0 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:39.653 [2024-05-15 04:53:53.774649] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.653 [2024-05-15 04:53:53.774932] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.653 [2024-05-15 04:53:53.774960] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.653 [2024-05-15 04:53:53.774992] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.653 [2024-05-15 04:53:53.775005] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.653 [2024-05-15 04:53:53.775074] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.653 [2024-05-15 04:53:53.775087] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:39.653 [2024-05-15 04:53:53.775122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.653 04:53:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.911 04:53:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.911 "name": "Existed_Raid", 00:16:39.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.911 "strip_size_kb": 64, 00:16:39.911 "state": "configuring", 00:16:39.911 "raid_level": "concat", 00:16:39.911 "superblock": false, 00:16:39.911 "num_base_bdevs": 4, 00:16:39.911 "num_base_bdevs_discovered": 0, 00:16:39.911 "num_base_bdevs_operational": 4, 00:16:39.911 "base_bdevs_list": [ 00:16:39.911 { 00:16:39.911 
"name": "BaseBdev1", 00:16:39.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.911 "is_configured": false, 00:16:39.911 "data_offset": 0, 00:16:39.911 "data_size": 0 00:16:39.911 }, 00:16:39.911 { 00:16:39.911 "name": "BaseBdev2", 00:16:39.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.911 "is_configured": false, 00:16:39.911 "data_offset": 0, 00:16:39.911 "data_size": 0 00:16:39.911 }, 00:16:39.911 { 00:16:39.911 "name": "BaseBdev3", 00:16:39.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.911 "is_configured": false, 00:16:39.911 "data_offset": 0, 00:16:39.911 "data_size": 0 00:16:39.911 }, 00:16:39.911 { 00:16:39.911 "name": "BaseBdev4", 00:16:39.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.911 "is_configured": false, 00:16:39.911 "data_offset": 0, 00:16:39.911 "data_size": 0 00:16:39.911 } 00:16:39.911 ] 00:16:39.911 }' 00:16:39.911 04:53:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.911 04:53:54 -- common/autotest_common.sh@10 -- # set +x 00:16:40.477 04:53:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:40.736 [2024-05-15 04:53:54.718550] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.736 [2024-05-15 04:53:54.718586] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:16:40.736 04:53:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:40.736 [2024-05-15 04:53:54.870632] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.736 [2024-05-15 04:53:54.870691] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.736 [2024-05-15 04:53:54.870701] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.736 [2024-05-15 04:53:54.871010] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.736 [2024-05-15 04:53:54.871051] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.736 [2024-05-15 04:53:54.871097] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.736 [2024-05-15 04:53:54.871119] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:40.736 [2024-05-15 04:53:54.871145] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:40.736 04:53:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.995 [2024-05-15 04:53:55.080494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.995 BaseBdev1 00:16:40.995 04:53:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:40.995 04:53:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:40.995 04:53:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:40.995 04:53:55 -- common/autotest_common.sh@889 -- # local i 00:16:40.995 04:53:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:40.995 04:53:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:40.995 04:53:55 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.253 04:53:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.253 [ 00:16:41.253 { 00:16:41.253 "name": "BaseBdev1", 00:16:41.253 "aliases": [ 00:16:41.253 "0dcc430a-01c5-4221-b615-e6c49846dd14" 00:16:41.253 ], 00:16:41.253 "product_name": "Malloc disk", 00:16:41.253 "block_size": 512, 00:16:41.253 "num_blocks": 65536, 00:16:41.253 "uuid": "0dcc430a-01c5-4221-b615-e6c49846dd14", 00:16:41.253 "assigned_rate_limits": { 00:16:41.253 "rw_ios_per_sec": 0, 00:16:41.253 "rw_mbytes_per_sec": 0, 00:16:41.253 "r_mbytes_per_sec": 0, 00:16:41.253 "w_mbytes_per_sec": 0 00:16:41.253 }, 00:16:41.253 "claimed": true, 00:16:41.253 "claim_type": "exclusive_write", 00:16:41.253 "zoned": false, 00:16:41.253 "supported_io_types": { 00:16:41.253 "read": true, 00:16:41.253 "write": true, 00:16:41.253 "unmap": true, 00:16:41.253 "write_zeroes": true, 00:16:41.253 "flush": true, 00:16:41.253 "reset": true, 00:16:41.253 "compare": false, 00:16:41.253 "compare_and_write": false, 00:16:41.253 "abort": true, 00:16:41.253 "nvme_admin": false, 00:16:41.253 "nvme_io": false 00:16:41.253 }, 00:16:41.253 "memory_domains": [ 00:16:41.253 { 00:16:41.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.254 "dma_device_type": 2 00:16:41.254 } 00:16:41.254 ], 00:16:41.254 "driver_specific": {} 00:16:41.254 } 00:16:41.254 ] 00:16:41.254 04:53:55 -- common/autotest_common.sh@895 -- # return 0 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.254 04:53:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.512 04:53:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.512 "name": "Existed_Raid", 00:16:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.512 "strip_size_kb": 64, 00:16:41.512 "state": "configuring", 00:16:41.512 "raid_level": "concat", 00:16:41.512 "superblock": false, 00:16:41.512 "num_base_bdevs": 4, 00:16:41.512 "num_base_bdevs_discovered": 1, 00:16:41.512 "num_base_bdevs_operational": 4, 00:16:41.512 "base_bdevs_list": [ 00:16:41.512 { 00:16:41.512 "name": "BaseBdev1", 00:16:41.512 "uuid": "0dcc430a-01c5-4221-b615-e6c49846dd14", 00:16:41.512 "is_configured": true, 00:16:41.512 "data_offset": 0, 00:16:41.512 "data_size": 65536 00:16:41.512 }, 00:16:41.512 { 00:16:41.512 "name": "BaseBdev2", 00:16:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.512 "is_configured": false, 00:16:41.512 "data_offset": 0, 00:16:41.512 "data_size": 0 00:16:41.512 }, 
00:16:41.512 { 00:16:41.512 "name": "BaseBdev3", 00:16:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.512 "is_configured": false, 00:16:41.512 "data_offset": 0, 00:16:41.512 "data_size": 0 00:16:41.512 }, 00:16:41.512 { 00:16:41.512 "name": "BaseBdev4", 00:16:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.512 "is_configured": false, 00:16:41.512 "data_offset": 0, 00:16:41.512 "data_size": 0 00:16:41.512 } 00:16:41.512 ] 00:16:41.512 }' 00:16:41.512 04:53:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.512 04:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:42.078 04:53:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.338 [2024-05-15 04:53:56.384592] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.338 [2024-05-15 04:53:56.384645] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:16:42.338 04:53:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:42.338 04:53:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:42.597 [2024-05-15 04:53:56.612696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.597 [2024-05-15 04:53:56.613976] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.597 [2024-05-15 04:53:56.614058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.597 [2024-05-15 04:53:56.614080] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.597 [2024-05-15 04:53:56.614104] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.597 [2024-05-15 04:53:56.614114] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.597 [2024-05-15 04:53:56.614132] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.597 04:53:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.597 "name": "Existed_Raid", 00:16:42.597 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.597 "strip_size_kb": 64, 00:16:42.597 "state": "configuring", 00:16:42.597 "raid_level": "concat", 00:16:42.597 "superblock": false, 00:16:42.597 "num_base_bdevs": 4, 00:16:42.597 "num_base_bdevs_discovered": 1, 00:16:42.597 "num_base_bdevs_operational": 4, 00:16:42.597 "base_bdevs_list": [ 00:16:42.597 { 00:16:42.597 "name": "BaseBdev1", 00:16:42.597 "uuid": "0dcc430a-01c5-4221-b615-e6c49846dd14", 00:16:42.597 "is_configured": true, 00:16:42.597 "data_offset": 0, 00:16:42.597 "data_size": 65536 00:16:42.597 }, 00:16:42.597 { 00:16:42.598 "name": "BaseBdev2", 00:16:42.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.598 "is_configured": false, 00:16:42.598 "data_offset": 0, 00:16:42.598 "data_size": 0 00:16:42.598 }, 00:16:42.598 { 00:16:42.598 "name": "BaseBdev3", 00:16:42.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.598 "is_configured": false, 00:16:42.598 "data_offset": 0, 00:16:42.598 "data_size": 0 00:16:42.598 }, 00:16:42.598 { 00:16:42.598 "name": "BaseBdev4", 00:16:42.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.598 "is_configured": false, 00:16:42.598 "data_offset": 0, 00:16:42.598 "data_size": 0 00:16:42.598 } 00:16:42.598 ] 00:16:42.598 }' 00:16:42.598 04:53:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.598 04:53:56 -- common/autotest_common.sh@10 -- # set +x 00:16:43.533 04:53:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.534 [2024-05-15 04:53:57.626671] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.534 BaseBdev2 00:16:43.534 04:53:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:43.534 04:53:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:43.534 04:53:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:43.534 04:53:57 -- common/autotest_common.sh@889 -- # local i 00:16:43.534 04:53:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:43.534 04:53:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:43.534 04:53:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.793 04:53:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.793 [ 00:16:43.793 { 00:16:43.793 "name": "BaseBdev2", 00:16:43.793 "aliases": [ 00:16:43.793 "eaf4eacc-5bc3-4eb4-b0ac-223ef5e08ff0" 00:16:43.793 ], 00:16:43.793 "product_name": "Malloc disk", 00:16:43.793 "block_size": 512, 00:16:43.793 "num_blocks": 65536, 00:16:43.793 "uuid": "eaf4eacc-5bc3-4eb4-b0ac-223ef5e08ff0", 00:16:43.793 "assigned_rate_limits": { 00:16:43.793 "rw_ios_per_sec": 0, 00:16:43.793 "rw_mbytes_per_sec": 0, 00:16:43.793 "r_mbytes_per_sec": 0, 00:16:43.793 "w_mbytes_per_sec": 0 00:16:43.793 }, 00:16:43.793 "claimed": true, 00:16:43.793 "claim_type": "exclusive_write", 00:16:43.793 "zoned": false, 00:16:43.793 "supported_io_types": { 00:16:43.793 "read": true, 00:16:43.793 "write": true, 00:16:43.793 "unmap": true, 00:16:43.793 "write_zeroes": true, 00:16:43.793 "flush": true, 00:16:43.793 "reset": true, 00:16:43.793 "compare": false, 00:16:43.793 "compare_and_write": false, 00:16:43.793 "abort": true, 00:16:43.793 "nvme_admin": false, 00:16:43.793 "nvme_io": false 00:16:43.793 }, 00:16:43.793 "memory_domains": [ 
00:16:43.793 { 00:16:43.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.793 "dma_device_type": 2 00:16:43.793 } 00:16:43.793 ], 00:16:43.793 "driver_specific": {} 00:16:43.793 } 00:16:43.793 ] 00:16:43.793 04:53:57 -- common/autotest_common.sh@895 -- # return 0 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.793 04:53:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.053 04:53:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.053 "name": "Existed_Raid", 00:16:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.053 "strip_size_kb": 64, 00:16:44.053 "state": "configuring", 00:16:44.053 "raid_level": "concat", 00:16:44.053 "superblock": false, 00:16:44.053 "num_base_bdevs": 4, 00:16:44.053 "num_base_bdevs_discovered": 2, 00:16:44.053 "num_base_bdevs_operational": 4, 00:16:44.053 "base_bdevs_list": [ 00:16:44.053 { 00:16:44.053 "name": "BaseBdev1", 00:16:44.053 "uuid": "0dcc430a-01c5-4221-b615-e6c49846dd14", 00:16:44.053 "is_configured": true, 00:16:44.053 "data_offset": 0, 00:16:44.053 "data_size": 65536 00:16:44.053 }, 00:16:44.053 { 00:16:44.053 "name": "BaseBdev2", 00:16:44.053 "uuid": "eaf4eacc-5bc3-4eb4-b0ac-223ef5e08ff0", 00:16:44.053 "is_configured": true, 00:16:44.053 "data_offset": 0, 00:16:44.053 "data_size": 65536 00:16:44.053 }, 00:16:44.053 { 00:16:44.053 "name": "BaseBdev3", 00:16:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.053 "is_configured": false, 00:16:44.053 "data_offset": 0, 00:16:44.053 "data_size": 0 00:16:44.053 }, 00:16:44.053 { 00:16:44.053 "name": "BaseBdev4", 00:16:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.053 "is_configured": false, 00:16:44.053 "data_offset": 0, 00:16:44.053 "data_size": 0 00:16:44.053 } 00:16:44.053 ] 00:16:44.053 }' 00:16:44.053 04:53:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.053 04:53:58 -- common/autotest_common.sh@10 -- # set +x 00:16:44.619 04:53:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:44.886 BaseBdev3 00:16:44.886 [2024-05-15 04:53:58.916015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.886 04:53:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:44.886 04:53:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:44.886 04:53:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.886 
04:53:58 -- common/autotest_common.sh@889 -- # local i 00:16:44.886 04:53:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.886 04:53:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.886 04:53:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.886 04:53:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:45.160 [ 00:16:45.160 { 00:16:45.160 "name": "BaseBdev3", 00:16:45.160 "aliases": [ 00:16:45.160 "201cc6af-9d1d-4fc6-975d-a1a0957affcb" 00:16:45.160 ], 00:16:45.160 "product_name": "Malloc disk", 00:16:45.160 "block_size": 512, 00:16:45.160 "num_blocks": 65536, 00:16:45.160 "uuid": "201cc6af-9d1d-4fc6-975d-a1a0957affcb", 00:16:45.160 "assigned_rate_limits": { 00:16:45.160 "rw_ios_per_sec": 0, 00:16:45.160 "rw_mbytes_per_sec": 0, 00:16:45.160 "r_mbytes_per_sec": 0, 00:16:45.160 "w_mbytes_per_sec": 0 00:16:45.160 }, 00:16:45.160 "claimed": true, 00:16:45.160 "claim_type": "exclusive_write", 00:16:45.160 "zoned": false, 00:16:45.160 "supported_io_types": { 00:16:45.160 "read": true, 00:16:45.160 "write": true, 00:16:45.160 "unmap": true, 00:16:45.160 "write_zeroes": true, 00:16:45.160 "flush": true, 00:16:45.160 "reset": true, 00:16:45.160 "compare": false, 00:16:45.160 "compare_and_write": false, 00:16:45.160 "abort": true, 00:16:45.160 "nvme_admin": false, 00:16:45.160 "nvme_io": false 00:16:45.160 }, 00:16:45.160 "memory_domains": [ 00:16:45.160 { 00:16:45.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.160 "dma_device_type": 2 00:16:45.160 } 00:16:45.160 ], 00:16:45.160 "driver_specific": {} 00:16:45.160 } 00:16:45.160 ] 00:16:45.160 04:53:59 -- common/autotest_common.sh@895 -- # return 0 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.160 04:53:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.419 04:53:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.419 "name": "Existed_Raid", 00:16:45.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.419 "strip_size_kb": 64, 00:16:45.419 "state": "configuring", 00:16:45.419 "raid_level": "concat", 00:16:45.419 "superblock": false, 00:16:45.419 "num_base_bdevs": 4, 00:16:45.419 "num_base_bdevs_discovered": 3, 00:16:45.419 "num_base_bdevs_operational": 4, 00:16:45.419 "base_bdevs_list": [ 00:16:45.419 { 00:16:45.419 "name": 
"BaseBdev1", 00:16:45.419 "uuid": "0dcc430a-01c5-4221-b615-e6c49846dd14", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 }, 00:16:45.419 { 00:16:45.419 "name": "BaseBdev2", 00:16:45.419 "uuid": "eaf4eacc-5bc3-4eb4-b0ac-223ef5e08ff0", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 }, 00:16:45.419 { 00:16:45.419 "name": "BaseBdev3", 00:16:45.419 "uuid": "201cc6af-9d1d-4fc6-975d-a1a0957affcb", 00:16:45.419 "is_configured": true, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 65536 00:16:45.419 }, 00:16:45.419 { 00:16:45.419 "name": "BaseBdev4", 00:16:45.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.419 "is_configured": false, 00:16:45.419 "data_offset": 0, 00:16:45.419 "data_size": 0 00:16:45.419 } 00:16:45.419 ] 00:16:45.419 }' 00:16:45.419 04:53:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.419 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.987 04:54:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:46.245 [2024-05-15 04:54:00.258525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.245 [2024-05-15 04:54:00.258580] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:16:46.245 [2024-05-15 04:54:00.258589] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:46.245 [2024-05-15 04:54:00.258683] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:46.245 BaseBdev4 00:16:46.245 [2024-05-15 04:54:00.259241] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:16:46.245 [2024-05-15 04:54:00.259260] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:16:46.245 [2024-05-15 04:54:00.259476] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.245 04:54:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:46.245 04:54:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:46.245 04:54:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:46.245 04:54:00 -- common/autotest_common.sh@889 -- # local i 00:16:46.245 04:54:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:46.245 04:54:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:46.245 04:54:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.245 04:54:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:46.504 [ 00:16:46.504 { 00:16:46.504 "name": "BaseBdev4", 00:16:46.504 "aliases": [ 00:16:46.504 "e274e75f-aadc-44ee-8f05-cd12fcde6975" 00:16:46.504 ], 00:16:46.504 "product_name": "Malloc disk", 00:16:46.504 "block_size": 512, 00:16:46.504 "num_blocks": 65536, 00:16:46.504 "uuid": "e274e75f-aadc-44ee-8f05-cd12fcde6975", 00:16:46.504 "assigned_rate_limits": { 00:16:46.504 "rw_ios_per_sec": 0, 00:16:46.504 "rw_mbytes_per_sec": 0, 00:16:46.504 "r_mbytes_per_sec": 0, 00:16:46.504 "w_mbytes_per_sec": 0 00:16:46.504 }, 00:16:46.504 "claimed": true, 00:16:46.504 "claim_type": "exclusive_write", 00:16:46.504 "zoned": false, 00:16:46.504 
"supported_io_types": { 00:16:46.504 "read": true, 00:16:46.504 "write": true, 00:16:46.504 "unmap": true, 00:16:46.504 "write_zeroes": true, 00:16:46.504 "flush": true, 00:16:46.504 "reset": true, 00:16:46.504 "compare": false, 00:16:46.504 "compare_and_write": false, 00:16:46.504 "abort": true, 00:16:46.504 "nvme_admin": false, 00:16:46.504 "nvme_io": false 00:16:46.504 }, 00:16:46.504 "memory_domains": [ 00:16:46.504 { 00:16:46.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.504 "dma_device_type": 2 00:16:46.504 } 00:16:46.504 ], 00:16:46.504 "driver_specific": {} 00:16:46.504 } 00:16:46.504 ] 00:16:46.504 04:54:00 -- common/autotest_common.sh@895 -- # return 0 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.504 04:54:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.763 04:54:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.763 "name": "Existed_Raid", 00:16:46.763 "uuid": "1d20ecab-2623-4482-95be-2ce8d5dc3627", 00:16:46.763 "strip_size_kb": 64, 00:16:46.763 "state": "online", 00:16:46.763 "raid_level": "concat", 00:16:46.763 "superblock": false, 00:16:46.763 "num_base_bdevs": 4, 00:16:46.763 "num_base_bdevs_discovered": 4, 00:16:46.763 "num_base_bdevs_operational": 4, 00:16:46.763 "base_bdevs_list": [ 00:16:46.763 { 00:16:46.763 "name": "BaseBdev1", 00:16:46.763 "uuid": "0dcc430a-01c5-4221-b615-e6c49846dd14", 00:16:46.763 "is_configured": true, 00:16:46.763 "data_offset": 0, 00:16:46.763 "data_size": 65536 00:16:46.763 }, 00:16:46.763 { 00:16:46.763 "name": "BaseBdev2", 00:16:46.763 "uuid": "eaf4eacc-5bc3-4eb4-b0ac-223ef5e08ff0", 00:16:46.763 "is_configured": true, 00:16:46.763 "data_offset": 0, 00:16:46.763 "data_size": 65536 00:16:46.763 }, 00:16:46.763 { 00:16:46.763 "name": "BaseBdev3", 00:16:46.763 "uuid": "201cc6af-9d1d-4fc6-975d-a1a0957affcb", 00:16:46.763 "is_configured": true, 00:16:46.763 "data_offset": 0, 00:16:46.763 "data_size": 65536 00:16:46.763 }, 00:16:46.763 { 00:16:46.763 "name": "BaseBdev4", 00:16:46.764 "uuid": "e274e75f-aadc-44ee-8f05-cd12fcde6975", 00:16:46.764 "is_configured": true, 00:16:46.764 "data_offset": 0, 00:16:46.764 "data_size": 65536 00:16:46.764 } 00:16:46.764 ] 00:16:46.764 }' 00:16:46.764 04:54:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.764 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:16:47.331 04:54:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:16:47.331 [2024-05-15 04:54:01.506720] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.331 [2024-05-15 04:54:01.506775] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.331 [2024-05-15 04:54:01.506828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.589 04:54:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.848 04:54:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.848 "name": "Existed_Raid", 00:16:47.848 "uuid": "1d20ecab-2623-4482-95be-2ce8d5dc3627", 00:16:47.848 "strip_size_kb": 64, 00:16:47.848 "state": "offline", 00:16:47.848 "raid_level": "concat", 00:16:47.848 "superblock": false, 00:16:47.848 "num_base_bdevs": 4, 00:16:47.848 "num_base_bdevs_discovered": 3, 00:16:47.848 "num_base_bdevs_operational": 3, 00:16:47.848 "base_bdevs_list": [ 00:16:47.848 { 00:16:47.848 "name": null, 00:16:47.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.848 "is_configured": false, 00:16:47.848 "data_offset": 0, 00:16:47.848 "data_size": 65536 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "name": "BaseBdev2", 00:16:47.848 "uuid": "eaf4eacc-5bc3-4eb4-b0ac-223ef5e08ff0", 00:16:47.848 "is_configured": true, 00:16:47.848 "data_offset": 0, 00:16:47.848 "data_size": 65536 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "name": "BaseBdev3", 00:16:47.848 "uuid": "201cc6af-9d1d-4fc6-975d-a1a0957affcb", 00:16:47.848 "is_configured": true, 00:16:47.848 "data_offset": 0, 00:16:47.848 "data_size": 65536 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "name": "BaseBdev4", 00:16:47.848 "uuid": "e274e75f-aadc-44ee-8f05-cd12fcde6975", 00:16:47.848 "is_configured": true, 00:16:47.848 "data_offset": 0, 00:16:47.848 "data_size": 65536 00:16:47.848 } 00:16:47.848 ] 00:16:47.848 }' 00:16:47.848 04:54:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.848 04:54:01 -- common/autotest_common.sh@10 -- # set +x 00:16:48.107 04:54:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:48.107 04:54:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.107 04:54:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:48.107 04:54:02 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.366 04:54:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:48.366 04:54:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.366 04:54:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:48.625 [2024-05-15 04:54:02.654300] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.625 04:54:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.625 04:54:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.625 04:54:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.625 04:54:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:48.884 04:54:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:48.884 04:54:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.884 04:54:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:49.141 [2024-05-15 04:54:03.125037] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:49.141 04:54:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:49.141 04:54:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:49.141 04:54:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.141 04:54:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:49.398 04:54:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:49.398 04:54:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.398 04:54:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:49.657 [2024-05-15 04:54:03.714983] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:49.657 [2024-05-15 04:54:03.715044] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:16:49.657 04:54:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:49.657 04:54:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:49.657 04:54:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.657 04:54:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.916 04:54:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:49.916 04:54:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:49.916 04:54:03 -- bdev/bdev_raid.sh@287 -- # killprocess 54025 00:16:49.916 04:54:03 -- common/autotest_common.sh@926 -- # '[' -z 54025 ']' 00:16:49.916 04:54:03 -- common/autotest_common.sh@930 -- # kill -0 54025 00:16:49.916 04:54:03 -- common/autotest_common.sh@931 -- # uname 00:16:49.916 04:54:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:49.916 04:54:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54025 00:16:49.916 killing process with pid 54025 00:16:49.916 04:54:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:49.916 04:54:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:49.916 04:54:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54025' 00:16:49.916 04:54:04 -- 
common/autotest_common.sh@945 -- # kill 54025 00:16:49.916 04:54:04 -- common/autotest_common.sh@950 -- # wait 54025 00:16:49.916 [2024-05-15 04:54:04.003648] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.916 [2024-05-15 04:54:04.003774] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:51.292 00:16:51.292 real 0m13.501s 00:16:51.292 user 0m22.676s 00:16:51.292 sys 0m1.795s 00:16:51.292 04:54:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.292 04:54:05 -- common/autotest_common.sh@10 -- # set +x 00:16:51.292 ************************************ 00:16:51.292 END TEST raid_state_function_test 00:16:51.292 ************************************ 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:51.292 04:54:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:51.292 04:54:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.292 04:54:05 -- common/autotest_common.sh@10 -- # set +x 00:16:51.292 ************************************ 00:16:51.292 START TEST raid_state_function_test_sb 00:16:51.292 ************************************ 00:16:51.292 04:54:05 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.292 Process raid pid: 54466 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=54466 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 54466' 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 54466 /var/tmp/spdk-raid.sock 00:16:51.292 04:54:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:51.292 04:54:05 -- common/autotest_common.sh@819 -- # '[' -z 54466 ']' 00:16:51.292 04:54:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:51.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:51.292 04:54:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.292 04:54:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:51.292 04:54:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.292 04:54:05 -- common/autotest_common.sh@10 -- # set +x 00:16:51.551 [2024-05-15 04:54:05.647566] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:51.551 [2024-05-15 04:54:05.648010] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.809 [2024-05-15 04:54:05.828304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.067 [2024-05-15 04:54:06.099550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.348 [2024-05-15 04:54:06.365370] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.914 04:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.914 04:54:07 -- common/autotest_common.sh@852 -- # return 0 00:16:52.914 04:54:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:53.172 [2024-05-15 04:54:07.191965] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.172 [2024-05-15 04:54:07.192032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.172 [2024-05-15 04:54:07.192043] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.172 [2024-05-15 04:54:07.192062] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.172 [2024-05-15 04:54:07.192070] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.172 [2024-05-15 04:54:07.192116] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.172 [2024-05-15 04:54:07.192123] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.172 [2024-05-15 04:54:07.192146] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.172 "name": "Existed_Raid", 00:16:53.172 "uuid": "437bbad5-7567-4d9c-bda0-236f986e9c9e", 00:16:53.172 "strip_size_kb": 64, 00:16:53.172 "state": "configuring", 00:16:53.172 "raid_level": "concat", 00:16:53.172 "superblock": true, 00:16:53.172 "num_base_bdevs": 4, 00:16:53.172 "num_base_bdevs_discovered": 0, 00:16:53.172 "num_base_bdevs_operational": 4, 00:16:53.172 "base_bdevs_list": [ 00:16:53.172 { 00:16:53.172 "name": "BaseBdev1", 00:16:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.172 "is_configured": false, 00:16:53.172 "data_offset": 0, 00:16:53.172 "data_size": 0 00:16:53.172 }, 00:16:53.172 { 00:16:53.172 "name": "BaseBdev2", 00:16:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.172 "is_configured": false, 00:16:53.172 "data_offset": 0, 00:16:53.172 "data_size": 0 00:16:53.172 }, 00:16:53.172 { 00:16:53.172 "name": "BaseBdev3", 00:16:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.172 "is_configured": false, 00:16:53.172 "data_offset": 0, 00:16:53.172 "data_size": 0 00:16:53.172 }, 00:16:53.172 { 00:16:53.172 "name": "BaseBdev4", 00:16:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.172 "is_configured": false, 00:16:53.172 "data_offset": 0, 00:16:53.172 "data_size": 0 00:16:53.172 } 00:16:53.172 ] 00:16:53.172 }' 00:16:53.172 04:54:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.172 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:54.108 04:54:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.108 [2024-05-15 04:54:08.195952] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.108 [2024-05-15 04:54:08.195991] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:16:54.108 04:54:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:54.108 [2024-05-15 04:54:08.336044] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.108 [2024-05-15 04:54:08.336092] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.108 [2024-05-15 04:54:08.336101] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.108 [2024-05-15 04:54:08.336150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.108 [2024-05-15 04:54:08.336158] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.108 
[2024-05-15 04:54:08.336181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.108 [2024-05-15 04:54:08.336188] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.108 [2024-05-15 04:54:08.336210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.366 04:54:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.366 [2024-05-15 04:54:08.531390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.366 BaseBdev1 00:16:54.366 04:54:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:54.366 04:54:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:54.366 04:54:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:54.366 04:54:08 -- common/autotest_common.sh@889 -- # local i 00:16:54.366 04:54:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:54.366 04:54:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:54.366 04:54:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.624 04:54:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.624 [ 00:16:54.624 { 00:16:54.624 "name": "BaseBdev1", 00:16:54.624 "aliases": [ 00:16:54.624 "c67054f5-e6d5-47e1-9d35-61753420c7f5" 00:16:54.624 ], 00:16:54.624 "product_name": "Malloc disk", 00:16:54.624 "block_size": 512, 00:16:54.624 "num_blocks": 65536, 00:16:54.624 "uuid": "c67054f5-e6d5-47e1-9d35-61753420c7f5", 00:16:54.624 "assigned_rate_limits": { 00:16:54.624 "rw_ios_per_sec": 0, 00:16:54.624 "rw_mbytes_per_sec": 0, 00:16:54.624 "r_mbytes_per_sec": 0, 00:16:54.624 "w_mbytes_per_sec": 0 00:16:54.624 }, 00:16:54.624 "claimed": true, 00:16:54.624 "claim_type": "exclusive_write", 00:16:54.624 "zoned": false, 00:16:54.624 "supported_io_types": { 00:16:54.624 "read": true, 00:16:54.624 "write": true, 00:16:54.624 "unmap": true, 00:16:54.624 "write_zeroes": true, 00:16:54.624 "flush": true, 00:16:54.624 "reset": true, 00:16:54.624 "compare": false, 00:16:54.624 "compare_and_write": false, 00:16:54.624 "abort": true, 00:16:54.624 "nvme_admin": false, 00:16:54.624 "nvme_io": false 00:16:54.624 }, 00:16:54.624 "memory_domains": [ 00:16:54.624 { 00:16:54.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.624 "dma_device_type": 2 00:16:54.624 } 00:16:54.624 ], 00:16:54.624 "driver_specific": {} 00:16:54.624 } 00:16:54.624 ] 00:16:54.624 04:54:08 -- common/autotest_common.sh@895 -- # return 0 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.624 
04:54:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.624 04:54:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.883 04:54:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.883 "name": "Existed_Raid", 00:16:54.883 "uuid": "d66d03b7-28b1-4eb3-8687-7bba24f813b5", 00:16:54.883 "strip_size_kb": 64, 00:16:54.883 "state": "configuring", 00:16:54.883 "raid_level": "concat", 00:16:54.883 "superblock": true, 00:16:54.883 "num_base_bdevs": 4, 00:16:54.883 "num_base_bdevs_discovered": 1, 00:16:54.883 "num_base_bdevs_operational": 4, 00:16:54.883 "base_bdevs_list": [ 00:16:54.883 { 00:16:54.883 "name": "BaseBdev1", 00:16:54.883 "uuid": "c67054f5-e6d5-47e1-9d35-61753420c7f5", 00:16:54.883 "is_configured": true, 00:16:54.883 "data_offset": 2048, 00:16:54.883 "data_size": 63488 00:16:54.883 }, 00:16:54.883 { 00:16:54.883 "name": "BaseBdev2", 00:16:54.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.883 "is_configured": false, 00:16:54.883 "data_offset": 0, 00:16:54.883 "data_size": 0 00:16:54.883 }, 00:16:54.883 { 00:16:54.883 "name": "BaseBdev3", 00:16:54.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.883 "is_configured": false, 00:16:54.883 "data_offset": 0, 00:16:54.883 "data_size": 0 00:16:54.883 }, 00:16:54.883 { 00:16:54.883 "name": "BaseBdev4", 00:16:54.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.883 "is_configured": false, 00:16:54.883 "data_offset": 0, 00:16:54.883 "data_size": 0 00:16:54.883 } 00:16:54.883 ] 00:16:54.883 }' 00:16:54.883 04:54:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.883 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 04:54:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.708 [2024-05-15 04:54:09.823497] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.708 [2024-05-15 04:54:09.823541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:16:55.708 04:54:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:55.708 04:54:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:55.966 04:54:10 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.224 BaseBdev1 00:16:56.224 04:54:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:56.224 04:54:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:56.224 04:54:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:56.224 04:54:10 -- common/autotest_common.sh@889 -- # local i 00:16:56.224 04:54:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:56.224 04:54:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:56.224 04:54:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.482 04:54:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.482 [ 00:16:56.482 { 00:16:56.482 "name": "BaseBdev1", 00:16:56.482 "aliases": [ 00:16:56.482 
"678cb4ca-3a3c-4d35-9bd1-d5d68488b1d7" 00:16:56.482 ], 00:16:56.482 "product_name": "Malloc disk", 00:16:56.482 "block_size": 512, 00:16:56.482 "num_blocks": 65536, 00:16:56.482 "uuid": "678cb4ca-3a3c-4d35-9bd1-d5d68488b1d7", 00:16:56.482 "assigned_rate_limits": { 00:16:56.482 "rw_ios_per_sec": 0, 00:16:56.482 "rw_mbytes_per_sec": 0, 00:16:56.482 "r_mbytes_per_sec": 0, 00:16:56.482 "w_mbytes_per_sec": 0 00:16:56.482 }, 00:16:56.482 "claimed": false, 00:16:56.482 "zoned": false, 00:16:56.482 "supported_io_types": { 00:16:56.482 "read": true, 00:16:56.482 "write": true, 00:16:56.482 "unmap": true, 00:16:56.482 "write_zeroes": true, 00:16:56.482 "flush": true, 00:16:56.482 "reset": true, 00:16:56.482 "compare": false, 00:16:56.482 "compare_and_write": false, 00:16:56.482 "abort": true, 00:16:56.482 "nvme_admin": false, 00:16:56.482 "nvme_io": false 00:16:56.482 }, 00:16:56.482 "memory_domains": [ 00:16:56.482 { 00:16:56.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.482 "dma_device_type": 2 00:16:56.482 } 00:16:56.482 ], 00:16:56.482 "driver_specific": {} 00:16:56.482 } 00:16:56.482 ] 00:16:56.482 04:54:10 -- common/autotest_common.sh@895 -- # return 0 00:16:56.482 04:54:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:56.740 [2024-05-15 04:54:10.810914] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.740 [2024-05-15 04:54:10.812147] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.740 [2024-05-15 04:54:10.812221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.740 [2024-05-15 04:54:10.812232] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.740 [2024-05-15 04:54:10.812254] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.740 [2024-05-15 04:54:10.812262] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:56.740 [2024-05-15 04:54:10.812279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.740 04:54:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.998 04:54:11 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:56.998 "name": "Existed_Raid", 00:16:56.998 "uuid": "8ee66201-43a8-4779-9273-8932f8fb534e", 00:16:56.998 "strip_size_kb": 64, 00:16:56.998 "state": "configuring", 00:16:56.998 "raid_level": "concat", 00:16:56.998 "superblock": true, 00:16:56.998 "num_base_bdevs": 4, 00:16:56.998 "num_base_bdevs_discovered": 1, 00:16:56.998 "num_base_bdevs_operational": 4, 00:16:56.998 "base_bdevs_list": [ 00:16:56.998 { 00:16:56.998 "name": "BaseBdev1", 00:16:56.998 "uuid": "678cb4ca-3a3c-4d35-9bd1-d5d68488b1d7", 00:16:56.998 "is_configured": true, 00:16:56.998 "data_offset": 2048, 00:16:56.998 "data_size": 63488 00:16:56.998 }, 00:16:56.998 { 00:16:56.998 "name": "BaseBdev2", 00:16:56.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.998 "is_configured": false, 00:16:56.999 "data_offset": 0, 00:16:56.999 "data_size": 0 00:16:56.999 }, 00:16:56.999 { 00:16:56.999 "name": "BaseBdev3", 00:16:56.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.999 "is_configured": false, 00:16:56.999 "data_offset": 0, 00:16:56.999 "data_size": 0 00:16:56.999 }, 00:16:56.999 { 00:16:56.999 "name": "BaseBdev4", 00:16:56.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.999 "is_configured": false, 00:16:56.999 "data_offset": 0, 00:16:56.999 "data_size": 0 00:16:56.999 } 00:16:56.999 ] 00:16:56.999 }' 00:16:56.999 04:54:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.999 04:54:11 -- common/autotest_common.sh@10 -- # set +x 00:16:57.565 04:54:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.565 [2024-05-15 04:54:11.780614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.565 BaseBdev2 00:16:57.565 04:54:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:57.565 04:54:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:57.565 04:54:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:57.565 04:54:11 -- common/autotest_common.sh@889 -- # local i 00:16:57.565 04:54:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:57.565 04:54:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:57.565 04:54:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.825 04:54:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.107 [ 00:16:58.107 { 00:16:58.107 "name": "BaseBdev2", 00:16:58.107 "aliases": [ 00:16:58.107 "2935a21d-b099-4cbb-98f7-dba0da41cd27" 00:16:58.107 ], 00:16:58.107 "product_name": "Malloc disk", 00:16:58.107 "block_size": 512, 00:16:58.107 "num_blocks": 65536, 00:16:58.107 "uuid": "2935a21d-b099-4cbb-98f7-dba0da41cd27", 00:16:58.107 "assigned_rate_limits": { 00:16:58.107 "rw_ios_per_sec": 0, 00:16:58.107 "rw_mbytes_per_sec": 0, 00:16:58.107 "r_mbytes_per_sec": 0, 00:16:58.107 "w_mbytes_per_sec": 0 00:16:58.107 }, 00:16:58.107 "claimed": true, 00:16:58.107 "claim_type": "exclusive_write", 00:16:58.107 "zoned": false, 00:16:58.107 "supported_io_types": { 00:16:58.107 "read": true, 00:16:58.107 "write": true, 00:16:58.107 "unmap": true, 00:16:58.107 "write_zeroes": true, 00:16:58.107 "flush": true, 00:16:58.107 "reset": true, 00:16:58.107 "compare": false, 00:16:58.107 "compare_and_write": false, 00:16:58.107 "abort": true, 00:16:58.107 "nvme_admin": false, 00:16:58.107 
"nvme_io": false 00:16:58.107 }, 00:16:58.107 "memory_domains": [ 00:16:58.107 { 00:16:58.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.107 "dma_device_type": 2 00:16:58.107 } 00:16:58.107 ], 00:16:58.107 "driver_specific": {} 00:16:58.107 } 00:16:58.107 ] 00:16:58.107 04:54:12 -- common/autotest_common.sh@895 -- # return 0 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.107 04:54:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.386 04:54:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.386 "name": "Existed_Raid", 00:16:58.386 "uuid": "8ee66201-43a8-4779-9273-8932f8fb534e", 00:16:58.386 "strip_size_kb": 64, 00:16:58.386 "state": "configuring", 00:16:58.386 "raid_level": "concat", 00:16:58.386 "superblock": true, 00:16:58.386 "num_base_bdevs": 4, 00:16:58.386 "num_base_bdevs_discovered": 2, 00:16:58.386 "num_base_bdevs_operational": 4, 00:16:58.386 "base_bdevs_list": [ 00:16:58.386 { 00:16:58.386 "name": "BaseBdev1", 00:16:58.386 "uuid": "678cb4ca-3a3c-4d35-9bd1-d5d68488b1d7", 00:16:58.386 "is_configured": true, 00:16:58.386 "data_offset": 2048, 00:16:58.386 "data_size": 63488 00:16:58.386 }, 00:16:58.386 { 00:16:58.386 "name": "BaseBdev2", 00:16:58.386 "uuid": "2935a21d-b099-4cbb-98f7-dba0da41cd27", 00:16:58.386 "is_configured": true, 00:16:58.386 "data_offset": 2048, 00:16:58.386 "data_size": 63488 00:16:58.386 }, 00:16:58.386 { 00:16:58.386 "name": "BaseBdev3", 00:16:58.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.386 "is_configured": false, 00:16:58.386 "data_offset": 0, 00:16:58.386 "data_size": 0 00:16:58.386 }, 00:16:58.386 { 00:16:58.386 "name": "BaseBdev4", 00:16:58.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.386 "is_configured": false, 00:16:58.386 "data_offset": 0, 00:16:58.386 "data_size": 0 00:16:58.386 } 00:16:58.386 ] 00:16:58.386 }' 00:16:58.386 04:54:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.386 04:54:12 -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 04:54:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.954 BaseBdev3 00:16:58.954 [2024-05-15 04:54:13.046901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.954 04:54:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:58.954 04:54:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:58.954 04:54:13 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:58.954 04:54:13 -- common/autotest_common.sh@889 -- # local i 00:16:58.954 04:54:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:58.954 04:54:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:58.954 04:54:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.212 04:54:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:59.212 [ 00:16:59.212 { 00:16:59.212 "name": "BaseBdev3", 00:16:59.212 "aliases": [ 00:16:59.212 "98c8efd0-e35c-419c-a493-2ecc4d216b4a" 00:16:59.212 ], 00:16:59.212 "product_name": "Malloc disk", 00:16:59.212 "block_size": 512, 00:16:59.212 "num_blocks": 65536, 00:16:59.212 "uuid": "98c8efd0-e35c-419c-a493-2ecc4d216b4a", 00:16:59.212 "assigned_rate_limits": { 00:16:59.212 "rw_ios_per_sec": 0, 00:16:59.212 "rw_mbytes_per_sec": 0, 00:16:59.212 "r_mbytes_per_sec": 0, 00:16:59.212 "w_mbytes_per_sec": 0 00:16:59.212 }, 00:16:59.212 "claimed": true, 00:16:59.212 "claim_type": "exclusive_write", 00:16:59.212 "zoned": false, 00:16:59.212 "supported_io_types": { 00:16:59.212 "read": true, 00:16:59.212 "write": true, 00:16:59.212 "unmap": true, 00:16:59.212 "write_zeroes": true, 00:16:59.212 "flush": true, 00:16:59.212 "reset": true, 00:16:59.212 "compare": false, 00:16:59.212 "compare_and_write": false, 00:16:59.212 "abort": true, 00:16:59.212 "nvme_admin": false, 00:16:59.212 "nvme_io": false 00:16:59.212 }, 00:16:59.212 "memory_domains": [ 00:16:59.212 { 00:16:59.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.212 "dma_device_type": 2 00:16:59.212 } 00:16:59.212 ], 00:16:59.212 "driver_specific": {} 00:16:59.212 } 00:16:59.212 ] 00:16:59.212 04:54:13 -- common/autotest_common.sh@895 -- # return 0 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.212 04:54:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.469 04:54:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.469 "name": "Existed_Raid", 00:16:59.469 "uuid": "8ee66201-43a8-4779-9273-8932f8fb534e", 00:16:59.469 "strip_size_kb": 64, 00:16:59.469 "state": "configuring", 00:16:59.469 "raid_level": "concat", 00:16:59.469 "superblock": true, 00:16:59.469 "num_base_bdevs": 4, 00:16:59.469 "num_base_bdevs_discovered": 3, 00:16:59.469 "num_base_bdevs_operational": 4, 
00:16:59.469 "base_bdevs_list": [ 00:16:59.469 { 00:16:59.469 "name": "BaseBdev1", 00:16:59.469 "uuid": "678cb4ca-3a3c-4d35-9bd1-d5d68488b1d7", 00:16:59.469 "is_configured": true, 00:16:59.469 "data_offset": 2048, 00:16:59.469 "data_size": 63488 00:16:59.469 }, 00:16:59.469 { 00:16:59.469 "name": "BaseBdev2", 00:16:59.470 "uuid": "2935a21d-b099-4cbb-98f7-dba0da41cd27", 00:16:59.470 "is_configured": true, 00:16:59.470 "data_offset": 2048, 00:16:59.470 "data_size": 63488 00:16:59.470 }, 00:16:59.470 { 00:16:59.470 "name": "BaseBdev3", 00:16:59.470 "uuid": "98c8efd0-e35c-419c-a493-2ecc4d216b4a", 00:16:59.470 "is_configured": true, 00:16:59.470 "data_offset": 2048, 00:16:59.470 "data_size": 63488 00:16:59.470 }, 00:16:59.470 { 00:16:59.470 "name": "BaseBdev4", 00:16:59.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.470 "is_configured": false, 00:16:59.470 "data_offset": 0, 00:16:59.470 "data_size": 0 00:16:59.470 } 00:16:59.470 ] 00:16:59.470 }' 00:16:59.470 04:54:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.470 04:54:13 -- common/autotest_common.sh@10 -- # set +x 00:17:00.036 04:54:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:00.295 BaseBdev4 00:17:00.295 [2024-05-15 04:54:14.365892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:00.295 [2024-05-15 04:54:14.366066] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029180 00:17:00.295 [2024-05-15 04:54:14.366078] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:00.295 [2024-05-15 04:54:14.366169] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:00.295 [2024-05-15 04:54:14.366373] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029180 00:17:00.295 [2024-05-15 04:54:14.366383] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000029180 00:17:00.295 [2024-05-15 04:54:14.366473] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.295 04:54:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:00.295 04:54:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:00.295 04:54:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:00.295 04:54:14 -- common/autotest_common.sh@889 -- # local i 00:17:00.295 04:54:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:00.295 04:54:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:00.295 04:54:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.554 04:54:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:00.554 [ 00:17:00.554 { 00:17:00.554 "name": "BaseBdev4", 00:17:00.554 "aliases": [ 00:17:00.554 "04c79981-76ce-4fde-b206-d4e2f8df2514" 00:17:00.554 ], 00:17:00.554 "product_name": "Malloc disk", 00:17:00.554 "block_size": 512, 00:17:00.554 "num_blocks": 65536, 00:17:00.554 "uuid": "04c79981-76ce-4fde-b206-d4e2f8df2514", 00:17:00.554 "assigned_rate_limits": { 00:17:00.554 "rw_ios_per_sec": 0, 00:17:00.554 "rw_mbytes_per_sec": 0, 00:17:00.554 "r_mbytes_per_sec": 0, 00:17:00.554 "w_mbytes_per_sec": 0 00:17:00.554 }, 00:17:00.554 "claimed": true, 00:17:00.554 "claim_type": 
"exclusive_write", 00:17:00.554 "zoned": false, 00:17:00.554 "supported_io_types": { 00:17:00.554 "read": true, 00:17:00.554 "write": true, 00:17:00.554 "unmap": true, 00:17:00.554 "write_zeroes": true, 00:17:00.554 "flush": true, 00:17:00.554 "reset": true, 00:17:00.554 "compare": false, 00:17:00.554 "compare_and_write": false, 00:17:00.554 "abort": true, 00:17:00.554 "nvme_admin": false, 00:17:00.554 "nvme_io": false 00:17:00.554 }, 00:17:00.554 "memory_domains": [ 00:17:00.554 { 00:17:00.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.554 "dma_device_type": 2 00:17:00.554 } 00:17:00.554 ], 00:17:00.554 "driver_specific": {} 00:17:00.554 } 00:17:00.554 ] 00:17:00.812 04:54:14 -- common/autotest_common.sh@895 -- # return 0 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.812 04:54:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.812 04:54:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.812 "name": "Existed_Raid", 00:17:00.812 "uuid": "8ee66201-43a8-4779-9273-8932f8fb534e", 00:17:00.812 "strip_size_kb": 64, 00:17:00.812 "state": "online", 00:17:00.812 "raid_level": "concat", 00:17:00.812 "superblock": true, 00:17:00.812 "num_base_bdevs": 4, 00:17:00.812 "num_base_bdevs_discovered": 4, 00:17:00.812 "num_base_bdevs_operational": 4, 00:17:00.812 "base_bdevs_list": [ 00:17:00.812 { 00:17:00.812 "name": "BaseBdev1", 00:17:00.812 "uuid": "678cb4ca-3a3c-4d35-9bd1-d5d68488b1d7", 00:17:00.812 "is_configured": true, 00:17:00.812 "data_offset": 2048, 00:17:00.812 "data_size": 63488 00:17:00.812 }, 00:17:00.812 { 00:17:00.812 "name": "BaseBdev2", 00:17:00.812 "uuid": "2935a21d-b099-4cbb-98f7-dba0da41cd27", 00:17:00.812 "is_configured": true, 00:17:00.812 "data_offset": 2048, 00:17:00.812 "data_size": 63488 00:17:00.812 }, 00:17:00.812 { 00:17:00.812 "name": "BaseBdev3", 00:17:00.812 "uuid": "98c8efd0-e35c-419c-a493-2ecc4d216b4a", 00:17:00.812 "is_configured": true, 00:17:00.812 "data_offset": 2048, 00:17:00.812 "data_size": 63488 00:17:00.812 }, 00:17:00.812 { 00:17:00.812 "name": "BaseBdev4", 00:17:00.812 "uuid": "04c79981-76ce-4fde-b206-d4e2f8df2514", 00:17:00.812 "is_configured": true, 00:17:00.812 "data_offset": 2048, 00:17:00.812 "data_size": 63488 00:17:00.812 } 00:17:00.812 ] 00:17:00.812 }' 00:17:00.812 04:54:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.812 04:54:15 -- common/autotest_common.sh@10 -- # set +x 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:01.748 [2024-05-15 04:54:15.806145] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.748 [2024-05-15 04:54:15.806176] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.748 [2024-05-15 04:54:15.806219] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.748 04:54:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.006 04:54:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.007 "name": "Existed_Raid", 00:17:02.007 "uuid": "8ee66201-43a8-4779-9273-8932f8fb534e", 00:17:02.007 "strip_size_kb": 64, 00:17:02.007 "state": "offline", 00:17:02.007 "raid_level": "concat", 00:17:02.007 "superblock": true, 00:17:02.007 "num_base_bdevs": 4, 00:17:02.007 "num_base_bdevs_discovered": 3, 00:17:02.007 "num_base_bdevs_operational": 3, 00:17:02.007 "base_bdevs_list": [ 00:17:02.007 { 00:17:02.007 "name": null, 00:17:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.007 "is_configured": false, 00:17:02.007 "data_offset": 2048, 00:17:02.007 "data_size": 63488 00:17:02.007 }, 00:17:02.007 { 00:17:02.007 "name": "BaseBdev2", 00:17:02.007 "uuid": "2935a21d-b099-4cbb-98f7-dba0da41cd27", 00:17:02.007 "is_configured": true, 00:17:02.007 "data_offset": 2048, 00:17:02.007 "data_size": 63488 00:17:02.007 }, 00:17:02.007 { 00:17:02.007 "name": "BaseBdev3", 00:17:02.007 "uuid": "98c8efd0-e35c-419c-a493-2ecc4d216b4a", 00:17:02.007 "is_configured": true, 00:17:02.007 "data_offset": 2048, 00:17:02.007 "data_size": 63488 00:17:02.007 }, 00:17:02.007 { 00:17:02.007 "name": "BaseBdev4", 00:17:02.007 "uuid": "04c79981-76ce-4fde-b206-d4e2f8df2514", 00:17:02.007 "is_configured": true, 00:17:02.007 "data_offset": 2048, 00:17:02.007 "data_size": 63488 00:17:02.007 } 00:17:02.007 ] 00:17:02.007 }' 00:17:02.007 04:54:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.007 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:17:02.573 04:54:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:02.573 04:54:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:02.573 04:54:16 -- 
bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:02.573 04:54:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.831 04:54:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:02.831 04:54:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.831 04:54:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:03.089 [2024-05-15 04:54:17.066169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.089 04:54:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:03.089 04:54:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:03.089 04:54:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.089 04:54:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:03.347 04:54:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:03.347 04:54:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.347 04:54:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:03.347 [2024-05-15 04:54:17.525128] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.605 04:54:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:03.863 [2024-05-15 04:54:17.967234] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:03.863 [2024-05-15 04:54:17.967279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029180 name Existed_Raid, state offline 00:17:03.863 04:54:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:03.863 04:54:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:03.863 04:54:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.863 04:54:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.122 04:54:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:04.122 04:54:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:04.122 04:54:18 -- bdev/bdev_raid.sh@287 -- # killprocess 54466 00:17:04.122 04:54:18 -- common/autotest_common.sh@926 -- # '[' -z 54466 ']' 00:17:04.122 04:54:18 -- common/autotest_common.sh@930 -- # kill -0 54466 00:17:04.122 04:54:18 -- common/autotest_common.sh@931 -- # uname 00:17:04.122 04:54:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.122 04:54:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54466 00:17:04.122 killing process with pid 54466 00:17:04.122 04:54:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:04.122 04:54:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:04.122 04:54:18 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 54466' 00:17:04.122 04:54:18 -- common/autotest_common.sh@945 -- # kill 54466 00:17:04.122 04:54:18 -- common/autotest_common.sh@950 -- # wait 54466 00:17:04.122 [2024-05-15 04:54:18.255940] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.122 [2024-05-15 04:54:18.256091] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.497 ************************************ 00:17:05.497 END TEST raid_state_function_test_sb 00:17:05.497 ************************************ 00:17:05.497 04:54:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:05.497 00:17:05.497 real 0m14.194s 00:17:05.497 user 0m23.887s 00:17:05.497 sys 0m1.887s 00:17:05.497 04:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.497 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:17:05.497 04:54:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:05.497 04:54:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:05.497 04:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:05.497 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:17:05.755 ************************************ 00:17:05.755 START TEST raid_superblock_test 00:17:05.755 ************************************ 00:17:05.755 04:54:19 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=54913 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 54913 /var/tmp/spdk-raid.sock 00:17:05.755 04:54:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:05.755 04:54:19 -- common/autotest_common.sh@819 -- # '[' -z 54913 ']' 00:17:05.755 04:54:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:05.755 04:54:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.755 04:54:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
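The harness pattern repeated at this point in the trace (pid 54466 torn down above, pid 54913 started here for raid_superblock_test) reduces to a few lines of shell. This is a condensed sketch under the assumption of the standard SPDK checkout path seen in the trace and the waitforlisten helper from test/common/autotest_common.sh; it is not a verbatim excerpt of bdev_raid.sh, and the rootdir/raid_pid names are illustrative:

    rootdir=/home/vagrant/spdk_repo/spdk     # assumed checkout path (matches the trace)
    rpc_sock=/var/tmp/spdk-raid.sock
    # Launch the no-op bdev application with RAID debug logging enabled
    # (-L bdev_raid) and a dedicated RPC socket (-r), then block until
    # the socket is accepting RPCs before issuing any rpc.py calls.
    # waitforlisten requires sourcing test/common/autotest_common.sh.
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$rpc_sock"
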
00:17:05.755 04:54:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.755 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:17:05.755 [2024-05-15 04:54:19.887320] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:05.755 [2024-05-15 04:54:19.887559] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54913 ] 00:17:06.013 [2024-05-15 04:54:20.070696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.271 [2024-05-15 04:54:20.339540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.530 [2024-05-15 04:54:20.603609] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.466 04:54:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:07.466 04:54:21 -- common/autotest_common.sh@852 -- # return 0 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:07.466 malloc1 00:17:07.466 04:54:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.725 [2024-05-15 04:54:21.761398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.725 [2024-05-15 04:54:21.761484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.725 [2024-05-15 04:54:21.761556] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:17:07.725 [2024-05-15 04:54:21.761594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.725 [2024-05-15 04:54:21.763284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.725 [2024-05-15 04:54:21.763325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.725 pt1 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.725 04:54:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:07.982 malloc2 00:17:07.982 04:54:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.982 [2024-05-15 04:54:22.177643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.982 [2024-05-15 04:54:22.177866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.982 [2024-05-15 04:54:22.177944] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:17:07.982 [2024-05-15 04:54:22.177984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.982 [2024-05-15 04:54:22.179521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.982 [2024-05-15 04:54:22.179557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.982 pt2 00:17:07.982 04:54:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:07.982 04:54:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:07.982 04:54:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:07.982 04:54:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:07.982 04:54:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:07.983 04:54:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.983 04:54:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.983 04:54:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.983 04:54:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:08.240 malloc3 00:17:08.240 04:54:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:08.498 [2024-05-15 04:54:22.497924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:08.498 [2024-05-15 04:54:22.497996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.498 [2024-05-15 04:54:22.498043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:17:08.498 [2024-05-15 04:54:22.498074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.498 [2024-05-15 04:54:22.499487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.498 [2024-05-15 04:54:22.499525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:08.498 pt3 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:08.498 04:54:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:08.756 malloc4 00:17:08.756 04:54:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:08.756 [2024-05-15 04:54:22.951127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:08.756 [2024-05-15 04:54:22.951205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.756 [2024-05-15 04:54:22.951265] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:17:08.756 [2024-05-15 04:54:22.951323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.756 [2024-05-15 04:54:22.953033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.756 [2024-05-15 04:54:22.953074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:08.756 pt4 00:17:08.756 04:54:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:08.756 04:54:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:08.756 04:54:22 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:09.028 [2024-05-15 04:54:23.151236] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.028 [2024-05-15 04:54:23.152655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.028 [2024-05-15 04:54:23.152702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:09.028 [2024-05-15 04:54:23.152761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:09.028 [2024-05-15 04:54:23.152886] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:17:09.028 [2024-05-15 04:54:23.152899] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.028 [2024-05-15 04:54:23.152996] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:09.028 [2024-05-15 04:54:23.153184] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:17:09.028 [2024-05-15 04:54:23.153194] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:17:09.028 [2024-05-15 04:54:23.153292] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.028 04:54:23 -- bdev/bdev_raid.sh@127 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.293 04:54:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.293 "name": "raid_bdev1", 00:17:09.293 "uuid": "62ac0686-82ec-4c0e-97f2-91207f6981bb", 00:17:09.293 "strip_size_kb": 64, 00:17:09.293 "state": "online", 00:17:09.293 "raid_level": "concat", 00:17:09.293 "superblock": true, 00:17:09.293 "num_base_bdevs": 4, 00:17:09.293 "num_base_bdevs_discovered": 4, 00:17:09.293 "num_base_bdevs_operational": 4, 00:17:09.293 "base_bdevs_list": [ 00:17:09.293 { 00:17:09.293 "name": "pt1", 00:17:09.293 "uuid": "d06773d8-c4b5-54d4-a8e9-496919013767", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "pt2", 00:17:09.293 "uuid": "1af08806-9a20-55c4-be19-c444ec83837b", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "pt3", 00:17:09.293 "uuid": "5c771444-7e53-5d56-8518-2841b732e155", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "pt4", 00:17:09.293 "uuid": "8df9275d-cfa5-5071-9db5-61e14528ddcb", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 } 00:17:09.293 ] 00:17:09.293 }' 00:17:09.293 04:54:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.293 04:54:23 -- common/autotest_common.sh@10 -- # set +x 00:17:09.859 04:54:23 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:09.859 04:54:23 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:09.859 [2024-05-15 04:54:24.083352] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.117 04:54:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=62ac0686-82ec-4c0e-97f2-91207f6981bb 00:17:10.117 04:54:24 -- bdev/bdev_raid.sh@380 -- # '[' -z 62ac0686-82ec-4c0e-97f2-91207f6981bb ']' 00:17:10.117 04:54:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:10.117 [2024-05-15 04:54:24.235271] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.117 [2024-05-15 04:54:24.235301] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.117 [2024-05-15 04:54:24.235373] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.117 [2024-05-15 04:54:24.235420] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.117 [2024-05-15 04:54:24.235429] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:17:10.117 04:54:24 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.117 04:54:24 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:10.375 04:54:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:10.375 04:54:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:10.375 04:54:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.375 04:54:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
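The verification steps that produce the JSON dumps above all follow one query shape: fetch every RAID bdev over the test socket, isolate one entry with jq, then compare fields against the expected state. A minimal sketch, using only the rpc.py invocations and jq filters that appear verbatim in the trace; the rpc_py shorthand variable is illustrative:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Pull the full RAID bdev list and keep only the bdev under test.
    tmp=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Field-by-field checks against the expected values, e.g. state and level.
    [ "$(echo "$tmp" | jq -r '.state')" = online ]
    [ "$(echo "$tmp" | jq -r '.raid_level')" = concat ]
    # The bdev-level UUID is read separately via bdev_get_bdevs.
    raid_bdev_uuid=$($rpc_py bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
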
00:17:10.375 04:54:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.375 04:54:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:10.633 04:54:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.633 04:54:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:10.891 04:54:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.891 04:54:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:10.891 04:54:25 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:10.891 04:54:25 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:11.150 04:54:25 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:11.150 04:54:25 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:11.150 04:54:25 -- common/autotest_common.sh@640 -- # local es=0 00:17:11.150 04:54:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:11.150 04:54:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.150 04:54:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.150 04:54:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.150 04:54:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.150 04:54:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.150 04:54:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.150 04:54:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.150 04:54:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:11.150 04:54:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:11.421 [2024-05-15 04:54:25.471340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:11.421 [2024-05-15 04:54:25.472675] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:11.421 [2024-05-15 04:54:25.472709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:11.421 [2024-05-15 04:54:25.472756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:11.421 [2024-05-15 04:54:25.472789] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:11.421 [2024-05-15 04:54:25.472865] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:11.421 [2024-05-15 04:54:25.472894] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:11.421 
[2024-05-15 04:54:25.472938] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:11.421 [2024-05-15 04:54:25.472961] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.421 [2024-05-15 04:54:25.472971] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002e580 name raid_bdev1, state configuring 00:17:11.421 request: 00:17:11.421 { 00:17:11.421 "name": "raid_bdev1", 00:17:11.421 "raid_level": "concat", 00:17:11.421 "base_bdevs": [ 00:17:11.421 "malloc1", 00:17:11.421 "malloc2", 00:17:11.421 "malloc3", 00:17:11.421 "malloc4" 00:17:11.421 ], 00:17:11.421 "superblock": false, 00:17:11.421 "strip_size_kb": 64, 00:17:11.421 "method": "bdev_raid_create", 00:17:11.421 "req_id": 1 00:17:11.421 } 00:17:11.421 Got JSON-RPC error response 00:17:11.421 response: 00:17:11.421 { 00:17:11.421 "code": -17, 00:17:11.421 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:11.421 } 00:17:11.421 04:54:25 -- common/autotest_common.sh@643 -- # es=1 00:17:11.421 04:54:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:11.421 04:54:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:11.421 04:54:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:11.421 04:54:25 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.421 04:54:25 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:11.693 [2024-05-15 04:54:25.839364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:11.693 [2024-05-15 04:54:25.839424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.693 [2024-05-15 04:54:25.839483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:17:11.693 [2024-05-15 04:54:25.839509] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.693 [2024-05-15 04:54:25.841036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.693 [2024-05-15 04:54:25.841096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:11.693 [2024-05-15 04:54:25.841182] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:11.693 [2024-05-15 04:54:25.841240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.693 pt1 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.693 04:54:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.950 04:54:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.950 "name": "raid_bdev1", 00:17:11.950 "uuid": "62ac0686-82ec-4c0e-97f2-91207f6981bb", 00:17:11.950 "strip_size_kb": 64, 00:17:11.950 "state": "configuring", 00:17:11.950 "raid_level": "concat", 00:17:11.950 "superblock": true, 00:17:11.950 "num_base_bdevs": 4, 00:17:11.950 "num_base_bdevs_discovered": 1, 00:17:11.950 "num_base_bdevs_operational": 4, 00:17:11.950 "base_bdevs_list": [ 00:17:11.950 { 00:17:11.950 "name": "pt1", 00:17:11.950 "uuid": "d06773d8-c4b5-54d4-a8e9-496919013767", 00:17:11.950 "is_configured": true, 00:17:11.950 "data_offset": 2048, 00:17:11.950 "data_size": 63488 00:17:11.950 }, 00:17:11.950 { 00:17:11.950 "name": null, 00:17:11.950 "uuid": "1af08806-9a20-55c4-be19-c444ec83837b", 00:17:11.950 "is_configured": false, 00:17:11.950 "data_offset": 2048, 00:17:11.950 "data_size": 63488 00:17:11.950 }, 00:17:11.950 { 00:17:11.950 "name": null, 00:17:11.950 "uuid": "5c771444-7e53-5d56-8518-2841b732e155", 00:17:11.950 "is_configured": false, 00:17:11.950 "data_offset": 2048, 00:17:11.950 "data_size": 63488 00:17:11.950 }, 00:17:11.950 { 00:17:11.950 "name": null, 00:17:11.950 "uuid": "8df9275d-cfa5-5071-9db5-61e14528ddcb", 00:17:11.950 "is_configured": false, 00:17:11.950 "data_offset": 2048, 00:17:11.950 "data_size": 63488 00:17:11.950 } 00:17:11.950 ] 00:17:11.950 }' 00:17:11.950 04:54:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.950 04:54:26 -- common/autotest_common.sh@10 -- # set +x 00:17:12.516 04:54:26 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:12.516 04:54:26 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.775 [2024-05-15 04:54:26.819468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.775 [2024-05-15 04:54:26.819532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.775 [2024-05-15 04:54:26.819594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031880 00:17:12.775 [2024-05-15 04:54:26.819614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.775 [2024-05-15 04:54:26.820108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.775 [2024-05-15 04:54:26.820151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.775 [2024-05-15 04:54:26.820239] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:12.775 [2024-05-15 04:54:26.820260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.775 pt2 00:17:12.775 04:54:26 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:13.034 [2024-05-15 04:54:27.043491] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
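[annotation] The verify_raid_bdev_state calls traced here boil down to one bdev_raid_get_bdevs query filtered with jq, followed by comparisons against the expected fields. A minimal hand-run sketch of the same check follows (assuming the test's bdev_svc app is still listening on the socket; the variable names are illustrative, not taken from bdev_raid.sh):

    # Sketch: reproduce the state check by hand against the test RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Same query and jq filter as in the trace above.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')

    # Fields verify_raid_bdev_state asserts on at this point in the test.
    jq -r '.state' <<<"$info"                      # expected: configuring
    jq -r '.num_base_bdevs_discovered' <<<"$info"  # expected: 1 (of 4)
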
00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.034 "name": "raid_bdev1", 00:17:13.034 "uuid": "62ac0686-82ec-4c0e-97f2-91207f6981bb", 00:17:13.034 "strip_size_kb": 64, 00:17:13.034 "state": "configuring", 00:17:13.034 "raid_level": "concat", 00:17:13.034 "superblock": true, 00:17:13.034 "num_base_bdevs": 4, 00:17:13.034 "num_base_bdevs_discovered": 1, 00:17:13.034 "num_base_bdevs_operational": 4, 00:17:13.034 "base_bdevs_list": [ 00:17:13.034 { 00:17:13.034 "name": "pt1", 00:17:13.034 "uuid": "d06773d8-c4b5-54d4-a8e9-496919013767", 00:17:13.034 "is_configured": true, 00:17:13.034 "data_offset": 2048, 00:17:13.034 "data_size": 63488 00:17:13.034 }, 00:17:13.034 { 00:17:13.034 "name": null, 00:17:13.034 "uuid": "1af08806-9a20-55c4-be19-c444ec83837b", 00:17:13.034 "is_configured": false, 00:17:13.034 "data_offset": 2048, 00:17:13.034 "data_size": 63488 00:17:13.034 }, 00:17:13.034 { 00:17:13.034 "name": null, 00:17:13.034 "uuid": "5c771444-7e53-5d56-8518-2841b732e155", 00:17:13.034 "is_configured": false, 00:17:13.034 "data_offset": 2048, 00:17:13.034 "data_size": 63488 00:17:13.034 }, 00:17:13.034 { 00:17:13.034 "name": null, 00:17:13.034 "uuid": "8df9275d-cfa5-5071-9db5-61e14528ddcb", 00:17:13.034 "is_configured": false, 00:17:13.034 "data_offset": 2048, 00:17:13.034 "data_size": 63488 00:17:13.034 } 00:17:13.034 ] 00:17:13.034 }' 00:17:13.034 04:54:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.034 04:54:27 -- common/autotest_common.sh@10 -- # set +x 00:17:13.969 04:54:27 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:13.969 04:54:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:13.969 04:54:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.969 [2024-05-15 04:54:28.027592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.969 [2024-05-15 04:54:28.027659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.969 [2024-05-15 04:54:28.027704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032d80 00:17:13.969 [2024-05-15 04:54:28.027900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.969 [2024-05-15 04:54:28.028244] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.969 [2024-05-15 04:54:28.028279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.969 [2024-05-15 04:54:28.028363] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:17:13.969 [2024-05-15 04:54:28.028382] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.969 pt2 00:17:13.969 04:54:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:13.969 04:54:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:13.969 04:54:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.969 [2024-05-15 04:54:28.163591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.969 [2024-05-15 04:54:28.163637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.969 [2024-05-15 04:54:28.163683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280 00:17:13.969 [2024-05-15 04:54:28.163706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.969 [2024-05-15 04:54:28.164117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.969 [2024-05-15 04:54:28.164161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.969 [2024-05-15 04:54:28.164230] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:13.969 [2024-05-15 04:54:28.164248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:13.969 pt3 00:17:13.969 04:54:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:13.969 04:54:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:13.969 04:54:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:14.228 [2024-05-15 04:54:28.299598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:14.228 [2024-05-15 04:54:28.299650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.228 [2024-05-15 04:54:28.299678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780 00:17:14.228 [2024-05-15 04:54:28.299698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.228 [2024-05-15 04:54:28.300114] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.228 [2024-05-15 04:54:28.300152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:14.228 [2024-05-15 04:54:28.300210] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:14.228 [2024-05-15 04:54:28.300226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:14.228 [2024-05-15 04:54:28.300295] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000031280 00:17:14.228 [2024-05-15 04:54:28.300304] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:14.228 [2024-05-15 04:54:28.300364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:14.228 [2024-05-15 04:54:28.300527] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000031280 00:17:14.228 [2024-05-15 04:54:28.300537] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000031280 00:17:14.228 [2024-05-15 04:54:28.300622] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:14.228 pt4 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.228 04:54:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.486 04:54:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.486 "name": "raid_bdev1", 00:17:14.486 "uuid": "62ac0686-82ec-4c0e-97f2-91207f6981bb", 00:17:14.486 "strip_size_kb": 64, 00:17:14.486 "state": "online", 00:17:14.486 "raid_level": "concat", 00:17:14.486 "superblock": true, 00:17:14.486 "num_base_bdevs": 4, 00:17:14.486 "num_base_bdevs_discovered": 4, 00:17:14.486 "num_base_bdevs_operational": 4, 00:17:14.486 "base_bdevs_list": [ 00:17:14.486 { 00:17:14.486 "name": "pt1", 00:17:14.486 "uuid": "d06773d8-c4b5-54d4-a8e9-496919013767", 00:17:14.486 "is_configured": true, 00:17:14.486 "data_offset": 2048, 00:17:14.486 "data_size": 63488 00:17:14.486 }, 00:17:14.486 { 00:17:14.486 "name": "pt2", 00:17:14.486 "uuid": "1af08806-9a20-55c4-be19-c444ec83837b", 00:17:14.486 "is_configured": true, 00:17:14.486 "data_offset": 2048, 00:17:14.486 "data_size": 63488 00:17:14.486 }, 00:17:14.486 { 00:17:14.486 "name": "pt3", 00:17:14.486 "uuid": "5c771444-7e53-5d56-8518-2841b732e155", 00:17:14.486 "is_configured": true, 00:17:14.486 "data_offset": 2048, 00:17:14.486 "data_size": 63488 00:17:14.486 }, 00:17:14.486 { 00:17:14.486 "name": "pt4", 00:17:14.486 "uuid": "8df9275d-cfa5-5071-9db5-61e14528ddcb", 00:17:14.486 "is_configured": true, 00:17:14.486 "data_offset": 2048, 00:17:14.486 "data_size": 63488 00:17:14.486 } 00:17:14.486 ] 00:17:14.486 }' 00:17:14.486 04:54:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.486 04:54:28 -- common/autotest_common.sh@10 -- # set +x 00:17:15.052 04:54:29 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:15.052 04:54:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:15.309 [2024-05-15 04:54:29.319823] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.309 04:54:29 -- bdev/bdev_raid.sh@430 -- # '[' 62ac0686-82ec-4c0e-97f2-91207f6981bb '!=' 62ac0686-82ec-4c0e-97f2-91207f6981bb ']' 00:17:15.309 04:54:29 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:15.309 04:54:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:15.309 04:54:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:15.309 04:54:29 -- bdev/bdev_raid.sh@511 -- # killprocess 54913 00:17:15.309 04:54:29 -- common/autotest_common.sh@926 -- # '[' 
-z 54913 ']' 00:17:15.309 04:54:29 -- common/autotest_common.sh@930 -- # kill -0 54913 00:17:15.309 04:54:29 -- common/autotest_common.sh@931 -- # uname 00:17:15.309 04:54:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.309 04:54:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54913 00:17:15.309 04:54:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:15.309 killing process with pid 54913 00:17:15.309 04:54:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:15.309 04:54:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54913' 00:17:15.309 04:54:29 -- common/autotest_common.sh@945 -- # kill 54913 00:17:15.309 04:54:29 -- common/autotest_common.sh@950 -- # wait 54913 00:17:15.309 [2024-05-15 04:54:29.373047] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.309 [2024-05-15 04:54:29.373111] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.309 [2024-05-15 04:54:29.373156] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.309 [2024-05-15 04:54:29.373164] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000031280 name raid_bdev1, state offline 00:17:15.566 [2024-05-15 04:54:29.757031] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:17.466 00:17:17.466 real 0m11.457s 00:17:17.466 user 0m18.683s 00:17:17.466 sys 0m1.476s 00:17:17.466 04:54:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.466 04:54:31 -- common/autotest_common.sh@10 -- # set +x 00:17:17.466 ************************************ 00:17:17.466 END TEST raid_superblock_test 00:17:17.466 ************************************ 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:17.466 04:54:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:17.466 04:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:17.466 04:54:31 -- common/autotest_common.sh@10 -- # set +x 00:17:17.466 ************************************ 00:17:17.466 START TEST raid_state_function_test 00:17:17.466 ************************************ 00:17:17.466 04:54:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.466 04:54:31 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.466 Process raid pid: 55238 00:17:17.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=55238 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 55238' 00:17:17.466 04:54:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 55238 /var/tmp/spdk-raid.sock 00:17:17.466 04:54:31 -- common/autotest_common.sh@819 -- # '[' -z 55238 ']' 00:17:17.466 04:54:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:17.466 04:54:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.466 04:54:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:17.466 04:54:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.466 04:54:31 -- common/autotest_common.sh@10 -- # set +x 00:17:17.466 [2024-05-15 04:54:31.412633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
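[annotation] The startup traced here follows the suite's standard harness pattern: launch bdev_svc on a private RPC socket, block until the socket answers, drive it with rpc.py, and kill the app at teardown. A rough equivalent is sketched below; the polling loop stands in for waitforlisten, whose real implementation in autotest_common.sh also handles timeouts and dead-process detection:

    # Rough equivalent of the harness startup above; rpc_get_methods is a
    # core SPDK RPC and serves as a cheap liveness probe.
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!

    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # ... bdev_raid_create / bdev_raid_get_bdevs calls run here ...

    kill "$raid_pid"
    wait "$raid_pid" 2>/dev/null || true
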
00:17:17.466 [2024-05-15 04:54:31.412893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.466 [2024-05-15 04:54:31.590592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.725 [2024-05-15 04:54:31.861962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.983 [2024-05-15 04:54:32.124487] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.919 04:54:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.919 04:54:32 -- common/autotest_common.sh@852 -- # return 0 00:17:18.919 04:54:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:18.919 [2024-05-15 04:54:33.021556] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.919 [2024-05-15 04:54:33.021625] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.919 [2024-05-15 04:54:33.021636] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.919 [2024-05-15 04:54:33.021655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.919 [2024-05-15 04:54:33.021662] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.919 [2024-05-15 04:54:33.021708] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.919 [2024-05-15 04:54:33.021874] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.919 [2024-05-15 04:54:33.021913] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.919 04:54:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.178 04:54:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.178 "name": "Existed_Raid", 00:17:19.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.178 "strip_size_kb": 0, 00:17:19.178 "state": "configuring", 00:17:19.178 "raid_level": "raid1", 00:17:19.178 "superblock": false, 00:17:19.178 "num_base_bdevs": 4, 00:17:19.178 "num_base_bdevs_discovered": 0, 00:17:19.178 "num_base_bdevs_operational": 4, 00:17:19.178 "base_bdevs_list": [ 00:17:19.178 { 00:17:19.178 "name": 
"BaseBdev1", 00:17:19.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.178 "is_configured": false, 00:17:19.178 "data_offset": 0, 00:17:19.178 "data_size": 0 00:17:19.178 }, 00:17:19.178 { 00:17:19.178 "name": "BaseBdev2", 00:17:19.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.178 "is_configured": false, 00:17:19.178 "data_offset": 0, 00:17:19.178 "data_size": 0 00:17:19.178 }, 00:17:19.178 { 00:17:19.178 "name": "BaseBdev3", 00:17:19.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.178 "is_configured": false, 00:17:19.178 "data_offset": 0, 00:17:19.178 "data_size": 0 00:17:19.178 }, 00:17:19.178 { 00:17:19.178 "name": "BaseBdev4", 00:17:19.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.178 "is_configured": false, 00:17:19.178 "data_offset": 0, 00:17:19.178 "data_size": 0 00:17:19.178 } 00:17:19.178 ] 00:17:19.178 }' 00:17:19.178 04:54:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.178 04:54:33 -- common/autotest_common.sh@10 -- # set +x 00:17:19.746 04:54:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.746 [2024-05-15 04:54:33.957632] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.746 [2024-05-15 04:54:33.957678] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:17:19.746 04:54:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.004 [2024-05-15 04:54:34.101630] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.004 [2024-05-15 04:54:34.101680] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.004 [2024-05-15 04:54:34.101689] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.004 [2024-05-15 04:54:34.101721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.005 [2024-05-15 04:54:34.101886] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.005 [2024-05-15 04:54:34.101920] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.005 [2024-05-15 04:54:34.101928] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.005 [2024-05-15 04:54:34.101952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.005 04:54:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.263 BaseBdev1 00:17:20.263 [2024-05-15 04:54:34.287742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.263 04:54:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:20.263 04:54:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:20.263 04:54:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:20.263 04:54:34 -- common/autotest_common.sh@889 -- # local i 00:17:20.263 04:54:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:20.263 04:54:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:20.263 04:54:34 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.263 04:54:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:20.521 [ 00:17:20.521 { 00:17:20.521 "name": "BaseBdev1", 00:17:20.521 "aliases": [ 00:17:20.521 "bb0be67e-3065-495f-b25c-17ec11b30a44" 00:17:20.521 ], 00:17:20.521 "product_name": "Malloc disk", 00:17:20.521 "block_size": 512, 00:17:20.521 "num_blocks": 65536, 00:17:20.521 "uuid": "bb0be67e-3065-495f-b25c-17ec11b30a44", 00:17:20.521 "assigned_rate_limits": { 00:17:20.521 "rw_ios_per_sec": 0, 00:17:20.521 "rw_mbytes_per_sec": 0, 00:17:20.521 "r_mbytes_per_sec": 0, 00:17:20.521 "w_mbytes_per_sec": 0 00:17:20.521 }, 00:17:20.521 "claimed": true, 00:17:20.521 "claim_type": "exclusive_write", 00:17:20.521 "zoned": false, 00:17:20.521 "supported_io_types": { 00:17:20.521 "read": true, 00:17:20.521 "write": true, 00:17:20.521 "unmap": true, 00:17:20.521 "write_zeroes": true, 00:17:20.521 "flush": true, 00:17:20.521 "reset": true, 00:17:20.522 "compare": false, 00:17:20.522 "compare_and_write": false, 00:17:20.522 "abort": true, 00:17:20.522 "nvme_admin": false, 00:17:20.522 "nvme_io": false 00:17:20.522 }, 00:17:20.522 "memory_domains": [ 00:17:20.522 { 00:17:20.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.522 "dma_device_type": 2 00:17:20.522 } 00:17:20.522 ], 00:17:20.522 "driver_specific": {} 00:17:20.522 } 00:17:20.522 ] 00:17:20.522 04:54:34 -- common/autotest_common.sh@895 -- # return 0 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.522 04:54:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.780 04:54:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.780 "name": "Existed_Raid", 00:17:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.780 "strip_size_kb": 0, 00:17:20.780 "state": "configuring", 00:17:20.780 "raid_level": "raid1", 00:17:20.780 "superblock": false, 00:17:20.780 "num_base_bdevs": 4, 00:17:20.780 "num_base_bdevs_discovered": 1, 00:17:20.780 "num_base_bdevs_operational": 4, 00:17:20.780 "base_bdevs_list": [ 00:17:20.780 { 00:17:20.780 "name": "BaseBdev1", 00:17:20.780 "uuid": "bb0be67e-3065-495f-b25c-17ec11b30a44", 00:17:20.780 "is_configured": true, 00:17:20.780 "data_offset": 0, 00:17:20.780 "data_size": 65536 00:17:20.780 }, 00:17:20.780 { 00:17:20.780 "name": "BaseBdev2", 00:17:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.780 "is_configured": false, 00:17:20.780 "data_offset": 0, 00:17:20.780 "data_size": 0 00:17:20.780 }, 
00:17:20.780 { 00:17:20.780 "name": "BaseBdev3", 00:17:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.780 "is_configured": false, 00:17:20.780 "data_offset": 0, 00:17:20.780 "data_size": 0 00:17:20.780 }, 00:17:20.780 { 00:17:20.780 "name": "BaseBdev4", 00:17:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.781 "is_configured": false, 00:17:20.781 "data_offset": 0, 00:17:20.781 "data_size": 0 00:17:20.781 } 00:17:20.781 ] 00:17:20.781 }' 00:17:20.781 04:54:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.781 04:54:34 -- common/autotest_common.sh@10 -- # set +x 00:17:21.348 04:54:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:21.348 [2024-05-15 04:54:35.563845] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.348 [2024-05-15 04:54:35.563887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:17:21.348 04:54:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:21.607 [2024-05-15 04:54:35.711922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.607 [2024-05-15 04:54:35.713424] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.607 [2024-05-15 04:54:35.713495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.607 [2024-05-15 04:54:35.713515] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.607 [2024-05-15 04:54:35.713537] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.607 [2024-05-15 04:54:35.713545] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:21.607 [2024-05-15 04:54:35.713561] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.607 04:54:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.865 04:54:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.865 "name": "Existed_Raid", 00:17:21.865 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:21.865 "strip_size_kb": 0, 00:17:21.865 "state": "configuring", 00:17:21.865 "raid_level": "raid1", 00:17:21.865 "superblock": false, 00:17:21.865 "num_base_bdevs": 4, 00:17:21.865 "num_base_bdevs_discovered": 1, 00:17:21.865 "num_base_bdevs_operational": 4, 00:17:21.865 "base_bdevs_list": [ 00:17:21.865 { 00:17:21.865 "name": "BaseBdev1", 00:17:21.865 "uuid": "bb0be67e-3065-495f-b25c-17ec11b30a44", 00:17:21.865 "is_configured": true, 00:17:21.865 "data_offset": 0, 00:17:21.865 "data_size": 65536 00:17:21.865 }, 00:17:21.865 { 00:17:21.865 "name": "BaseBdev2", 00:17:21.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.866 "is_configured": false, 00:17:21.866 "data_offset": 0, 00:17:21.866 "data_size": 0 00:17:21.866 }, 00:17:21.866 { 00:17:21.866 "name": "BaseBdev3", 00:17:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.866 "is_configured": false, 00:17:21.866 "data_offset": 0, 00:17:21.866 "data_size": 0 00:17:21.866 }, 00:17:21.866 { 00:17:21.866 "name": "BaseBdev4", 00:17:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.866 "is_configured": false, 00:17:21.866 "data_offset": 0, 00:17:21.866 "data_size": 0 00:17:21.866 } 00:17:21.866 ] 00:17:21.866 }' 00:17:21.866 04:54:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.866 04:54:35 -- common/autotest_common.sh@10 -- # set +x 00:17:22.433 04:54:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:22.693 BaseBdev2 00:17:22.693 [2024-05-15 04:54:36.686124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.693 04:54:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:22.693 04:54:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:22.693 04:54:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:22.693 04:54:36 -- common/autotest_common.sh@889 -- # local i 00:17:22.693 04:54:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:22.693 04:54:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:22.693 04:54:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.693 04:54:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:22.970 [ 00:17:22.970 { 00:17:22.970 "name": "BaseBdev2", 00:17:22.970 "aliases": [ 00:17:22.970 "5e65af66-1315-4e8e-8237-385840bcc624" 00:17:22.970 ], 00:17:22.970 "product_name": "Malloc disk", 00:17:22.970 "block_size": 512, 00:17:22.970 "num_blocks": 65536, 00:17:22.970 "uuid": "5e65af66-1315-4e8e-8237-385840bcc624", 00:17:22.970 "assigned_rate_limits": { 00:17:22.970 "rw_ios_per_sec": 0, 00:17:22.970 "rw_mbytes_per_sec": 0, 00:17:22.970 "r_mbytes_per_sec": 0, 00:17:22.970 "w_mbytes_per_sec": 0 00:17:22.970 }, 00:17:22.970 "claimed": true, 00:17:22.970 "claim_type": "exclusive_write", 00:17:22.970 "zoned": false, 00:17:22.970 "supported_io_types": { 00:17:22.970 "read": true, 00:17:22.970 "write": true, 00:17:22.970 "unmap": true, 00:17:22.970 "write_zeroes": true, 00:17:22.970 "flush": true, 00:17:22.970 "reset": true, 00:17:22.970 "compare": false, 00:17:22.970 "compare_and_write": false, 00:17:22.970 "abort": true, 00:17:22.970 "nvme_admin": false, 00:17:22.970 "nvme_io": false 00:17:22.970 }, 00:17:22.970 "memory_domains": [ 00:17:22.970 { 
00:17:22.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.970 "dma_device_type": 2 00:17:22.970 } 00:17:22.970 ], 00:17:22.970 "driver_specific": {} 00:17:22.970 } 00:17:22.970 ] 00:17:22.970 04:54:36 -- common/autotest_common.sh@895 -- # return 0 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.970 04:54:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.970 04:54:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.970 "name": "Existed_Raid", 00:17:22.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.970 "strip_size_kb": 0, 00:17:22.970 "state": "configuring", 00:17:22.970 "raid_level": "raid1", 00:17:22.970 "superblock": false, 00:17:22.970 "num_base_bdevs": 4, 00:17:22.970 "num_base_bdevs_discovered": 2, 00:17:22.970 "num_base_bdevs_operational": 4, 00:17:22.970 "base_bdevs_list": [ 00:17:22.970 { 00:17:22.970 "name": "BaseBdev1", 00:17:22.970 "uuid": "bb0be67e-3065-495f-b25c-17ec11b30a44", 00:17:22.970 "is_configured": true, 00:17:22.970 "data_offset": 0, 00:17:22.970 "data_size": 65536 00:17:22.970 }, 00:17:22.970 { 00:17:22.970 "name": "BaseBdev2", 00:17:22.970 "uuid": "5e65af66-1315-4e8e-8237-385840bcc624", 00:17:22.970 "is_configured": true, 00:17:22.970 "data_offset": 0, 00:17:22.970 "data_size": 65536 00:17:22.970 }, 00:17:22.970 { 00:17:22.970 "name": "BaseBdev3", 00:17:22.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.970 "is_configured": false, 00:17:22.970 "data_offset": 0, 00:17:22.970 "data_size": 0 00:17:22.970 }, 00:17:22.970 { 00:17:22.970 "name": "BaseBdev4", 00:17:22.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.970 "is_configured": false, 00:17:22.970 "data_offset": 0, 00:17:22.970 "data_size": 0 00:17:22.970 } 00:17:22.970 ] 00:17:22.970 }' 00:17:22.970 04:54:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.970 04:54:37 -- common/autotest_common.sh@10 -- # set +x 00:17:23.535 04:54:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:23.792 BaseBdev3 00:17:23.792 [2024-05-15 04:54:37.926975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.792 04:54:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:23.792 04:54:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:23.792 04:54:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:23.792 04:54:37 -- 
common/autotest_common.sh@889 -- # local i 00:17:23.792 04:54:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:23.792 04:54:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:23.792 04:54:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.050 04:54:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:24.050 [ 00:17:24.050 { 00:17:24.050 "name": "BaseBdev3", 00:17:24.050 "aliases": [ 00:17:24.050 "5d367a42-e6a0-40c4-ac23-da096a621469" 00:17:24.050 ], 00:17:24.050 "product_name": "Malloc disk", 00:17:24.050 "block_size": 512, 00:17:24.050 "num_blocks": 65536, 00:17:24.050 "uuid": "5d367a42-e6a0-40c4-ac23-da096a621469", 00:17:24.050 "assigned_rate_limits": { 00:17:24.050 "rw_ios_per_sec": 0, 00:17:24.050 "rw_mbytes_per_sec": 0, 00:17:24.050 "r_mbytes_per_sec": 0, 00:17:24.050 "w_mbytes_per_sec": 0 00:17:24.050 }, 00:17:24.050 "claimed": true, 00:17:24.050 "claim_type": "exclusive_write", 00:17:24.050 "zoned": false, 00:17:24.050 "supported_io_types": { 00:17:24.050 "read": true, 00:17:24.050 "write": true, 00:17:24.050 "unmap": true, 00:17:24.050 "write_zeroes": true, 00:17:24.050 "flush": true, 00:17:24.050 "reset": true, 00:17:24.050 "compare": false, 00:17:24.050 "compare_and_write": false, 00:17:24.050 "abort": true, 00:17:24.050 "nvme_admin": false, 00:17:24.050 "nvme_io": false 00:17:24.050 }, 00:17:24.050 "memory_domains": [ 00:17:24.050 { 00:17:24.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.050 "dma_device_type": 2 00:17:24.050 } 00:17:24.050 ], 00:17:24.050 "driver_specific": {} 00:17:24.050 } 00:17:24.050 ] 00:17:24.050 04:54:38 -- common/autotest_common.sh@895 -- # return 0 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.050 04:54:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.375 04:54:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.375 "name": "Existed_Raid", 00:17:24.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.375 "strip_size_kb": 0, 00:17:24.375 "state": "configuring", 00:17:24.375 "raid_level": "raid1", 00:17:24.375 "superblock": false, 00:17:24.375 "num_base_bdevs": 4, 00:17:24.375 "num_base_bdevs_discovered": 3, 00:17:24.375 "num_base_bdevs_operational": 4, 00:17:24.375 "base_bdevs_list": [ 00:17:24.375 { 00:17:24.375 "name": "BaseBdev1", 
00:17:24.375 "uuid": "bb0be67e-3065-495f-b25c-17ec11b30a44", 00:17:24.375 "is_configured": true, 00:17:24.375 "data_offset": 0, 00:17:24.375 "data_size": 65536 00:17:24.375 }, 00:17:24.375 { 00:17:24.375 "name": "BaseBdev2", 00:17:24.375 "uuid": "5e65af66-1315-4e8e-8237-385840bcc624", 00:17:24.375 "is_configured": true, 00:17:24.375 "data_offset": 0, 00:17:24.375 "data_size": 65536 00:17:24.375 }, 00:17:24.375 { 00:17:24.375 "name": "BaseBdev3", 00:17:24.375 "uuid": "5d367a42-e6a0-40c4-ac23-da096a621469", 00:17:24.375 "is_configured": true, 00:17:24.375 "data_offset": 0, 00:17:24.375 "data_size": 65536 00:17:24.375 }, 00:17:24.375 { 00:17:24.375 "name": "BaseBdev4", 00:17:24.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.375 "is_configured": false, 00:17:24.375 "data_offset": 0, 00:17:24.375 "data_size": 0 00:17:24.375 } 00:17:24.375 ] 00:17:24.375 }' 00:17:24.375 04:54:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.375 04:54:38 -- common/autotest_common.sh@10 -- # set +x 00:17:24.942 04:54:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:25.200 [2024-05-15 04:54:39.245431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.200 [2024-05-15 04:54:39.245491] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:17:25.200 [2024-05-15 04:54:39.245500] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:25.200 [2024-05-15 04:54:39.245586] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:25.200 BaseBdev4 00:17:25.200 [2024-05-15 04:54:39.246272] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:17:25.200 [2024-05-15 04:54:39.246350] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:17:25.200 [2024-05-15 04:54:39.246839] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.200 04:54:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:25.200 04:54:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:25.200 04:54:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:25.200 04:54:39 -- common/autotest_common.sh@889 -- # local i 00:17:25.200 04:54:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:25.200 04:54:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:25.200 04:54:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.200 04:54:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:25.458 [ 00:17:25.458 { 00:17:25.458 "name": "BaseBdev4", 00:17:25.458 "aliases": [ 00:17:25.458 "857c7c8c-ec8d-4f2d-a07e-3dc322841a72" 00:17:25.458 ], 00:17:25.458 "product_name": "Malloc disk", 00:17:25.458 "block_size": 512, 00:17:25.458 "num_blocks": 65536, 00:17:25.458 "uuid": "857c7c8c-ec8d-4f2d-a07e-3dc322841a72", 00:17:25.458 "assigned_rate_limits": { 00:17:25.458 "rw_ios_per_sec": 0, 00:17:25.458 "rw_mbytes_per_sec": 0, 00:17:25.458 "r_mbytes_per_sec": 0, 00:17:25.458 "w_mbytes_per_sec": 0 00:17:25.458 }, 00:17:25.458 "claimed": true, 00:17:25.458 "claim_type": "exclusive_write", 00:17:25.458 "zoned": false, 00:17:25.458 "supported_io_types": { 
00:17:25.458 "read": true, 00:17:25.458 "write": true, 00:17:25.458 "unmap": true, 00:17:25.458 "write_zeroes": true, 00:17:25.458 "flush": true, 00:17:25.458 "reset": true, 00:17:25.458 "compare": false, 00:17:25.458 "compare_and_write": false, 00:17:25.458 "abort": true, 00:17:25.458 "nvme_admin": false, 00:17:25.458 "nvme_io": false 00:17:25.458 }, 00:17:25.458 "memory_domains": [ 00:17:25.458 { 00:17:25.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.458 "dma_device_type": 2 00:17:25.458 } 00:17:25.458 ], 00:17:25.458 "driver_specific": {} 00:17:25.458 } 00:17:25.458 ] 00:17:25.458 04:54:39 -- common/autotest_common.sh@895 -- # return 0 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.458 04:54:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.716 04:54:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.716 "name": "Existed_Raid", 00:17:25.716 "uuid": "b2aa0800-63ab-4d96-a97c-c54a82133e7a", 00:17:25.716 "strip_size_kb": 0, 00:17:25.716 "state": "online", 00:17:25.716 "raid_level": "raid1", 00:17:25.716 "superblock": false, 00:17:25.716 "num_base_bdevs": 4, 00:17:25.716 "num_base_bdevs_discovered": 4, 00:17:25.716 "num_base_bdevs_operational": 4, 00:17:25.716 "base_bdevs_list": [ 00:17:25.716 { 00:17:25.716 "name": "BaseBdev1", 00:17:25.716 "uuid": "bb0be67e-3065-495f-b25c-17ec11b30a44", 00:17:25.716 "is_configured": true, 00:17:25.716 "data_offset": 0, 00:17:25.716 "data_size": 65536 00:17:25.716 }, 00:17:25.716 { 00:17:25.716 "name": "BaseBdev2", 00:17:25.716 "uuid": "5e65af66-1315-4e8e-8237-385840bcc624", 00:17:25.716 "is_configured": true, 00:17:25.716 "data_offset": 0, 00:17:25.716 "data_size": 65536 00:17:25.716 }, 00:17:25.716 { 00:17:25.716 "name": "BaseBdev3", 00:17:25.716 "uuid": "5d367a42-e6a0-40c4-ac23-da096a621469", 00:17:25.716 "is_configured": true, 00:17:25.716 "data_offset": 0, 00:17:25.716 "data_size": 65536 00:17:25.716 }, 00:17:25.716 { 00:17:25.716 "name": "BaseBdev4", 00:17:25.716 "uuid": "857c7c8c-ec8d-4f2d-a07e-3dc322841a72", 00:17:25.716 "is_configured": true, 00:17:25.716 "data_offset": 0, 00:17:25.716 "data_size": 65536 00:17:25.716 } 00:17:25.716 ] 00:17:25.716 }' 00:17:25.716 04:54:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.716 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:17:26.282 04:54:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.540 [2024-05-15 04:54:40.589647] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.540 04:54:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.798 04:54:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.798 "name": "Existed_Raid", 00:17:26.798 "uuid": "b2aa0800-63ab-4d96-a97c-c54a82133e7a", 00:17:26.798 "strip_size_kb": 0, 00:17:26.798 "state": "online", 00:17:26.798 "raid_level": "raid1", 00:17:26.798 "superblock": false, 00:17:26.798 "num_base_bdevs": 4, 00:17:26.798 "num_base_bdevs_discovered": 3, 00:17:26.798 "num_base_bdevs_operational": 3, 00:17:26.798 "base_bdevs_list": [ 00:17:26.798 { 00:17:26.798 "name": null, 00:17:26.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.798 "is_configured": false, 00:17:26.798 "data_offset": 0, 00:17:26.798 "data_size": 65536 00:17:26.798 }, 00:17:26.798 { 00:17:26.798 "name": "BaseBdev2", 00:17:26.798 "uuid": "5e65af66-1315-4e8e-8237-385840bcc624", 00:17:26.798 "is_configured": true, 00:17:26.798 "data_offset": 0, 00:17:26.798 "data_size": 65536 00:17:26.798 }, 00:17:26.798 { 00:17:26.798 "name": "BaseBdev3", 00:17:26.798 "uuid": "5d367a42-e6a0-40c4-ac23-da096a621469", 00:17:26.798 "is_configured": true, 00:17:26.798 "data_offset": 0, 00:17:26.798 "data_size": 65536 00:17:26.798 }, 00:17:26.798 { 00:17:26.798 "name": "BaseBdev4", 00:17:26.798 "uuid": "857c7c8c-ec8d-4f2d-a07e-3dc322841a72", 00:17:26.799 "is_configured": true, 00:17:26.799 "data_offset": 0, 00:17:26.799 "data_size": 65536 00:17:26.799 } 00:17:26.799 ] 00:17:26.799 }' 00:17:26.799 04:54:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.799 04:54:40 -- common/autotest_common.sh@10 -- # set +x 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.365 04:54:41 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:27.623 [2024-05-15 04:54:41.679781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.623 04:54:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.623 04:54:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.623 04:54:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.623 04:54:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.881 04:54:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.881 04:54:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.882 04:54:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:28.140 [2024-05-15 04:54:42.147532] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.140 04:54:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.140 04:54:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.140 04:54:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.140 04:54:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:28.399 04:54:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:28.399 04:54:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.399 04:54:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:28.657 [2024-05-15 04:54:42.690206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:28.657 [2024-05-15 04:54:42.690235] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.657 [2024-05-15 04:54:42.690277] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.657 [2024-05-15 04:54:42.786145] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.657 [2024-05-15 04:54:42.786182] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:17:28.657 04:54:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.657 04:54:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.657 04:54:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.657 04:54:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.915 04:54:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:28.915 04:54:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:28.915 04:54:42 -- bdev/bdev_raid.sh@287 -- # killprocess 55238 00:17:28.915 04:54:42 -- common/autotest_common.sh@926 -- # '[' -z 55238 ']' 00:17:28.915 04:54:42 -- common/autotest_common.sh@930 -- # kill -0 55238 00:17:28.915 04:54:42 -- common/autotest_common.sh@931 -- # uname 00:17:28.915 04:54:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.915 04:54:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55238 00:17:28.915 killing process with pid 55238 00:17:28.915 04:54:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.915 04:54:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.915 04:54:42 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 55238' 00:17:28.915 04:54:42 -- common/autotest_common.sh@945 -- # kill 55238 00:17:28.915 04:54:42 -- common/autotest_common.sh@950 -- # wait 55238 00:17:28.915 [2024-05-15 04:54:42.984360] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.915 [2024-05-15 04:54:42.984483] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.292 ************************************ 00:17:30.292 END TEST raid_state_function_test 00:17:30.292 ************************************ 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:30.292 00:17:30.292 real 0m13.168s 00:17:30.292 user 0m22.068s 00:17:30.292 sys 0m1.753s 00:17:30.292 04:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.292 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:30.292 04:54:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:30.292 04:54:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:30.292 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 ************************************ 00:17:30.292 START TEST raid_state_function_test_sb 00:17:30.292 ************************************ 00:17:30.292 04:54:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:30.292 Process raid pid: 55667 00:17:30.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
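(A minimal sketch of the harness lifecycle visible in the traces around this point — killprocess tearing down pid 55238 above, bdev_svc starting as pid 55667 and waitforlisten polling it below. This is illustrative shell, not captured log output; it assumes the repository layout shown in this log, and the rpc_get_methods probe stands in for waitforlisten's internal retry loop.)

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Poll the UNIX-domain RPC socket until the app answers, as waitforlisten does with bounded retries.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # ... run the test RPCs against the socket ...
  kill -0 "$raid_pid" && kill "$raid_pid"   # killprocess: confirm the pid is alive, then signal it
  wait "$raid_pid"                          # reap it so the next test starts from a clean slate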
00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=55667 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 55667' 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 55667 /var/tmp/spdk-raid.sock 00:17:30.292 04:54:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:30.292 04:54:44 -- common/autotest_common.sh@819 -- # '[' -z 55667 ']' 00:17:30.292 04:54:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.292 04:54:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.292 04:54:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.292 04:54:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.292 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:17:30.549 [2024-05-15 04:54:44.637877] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:30.549 [2024-05-15 04:54:44.638097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.806 [2024-05-15 04:54:44.813818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.063 [2024-05-15 04:54:45.076117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.321 [2024-05-15 04:54:45.342813] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.887 04:54:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.887 04:54:46 -- common/autotest_common.sh@852 -- # return 0 00:17:31.887 04:54:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:32.146 [2024-05-15 04:54:46.204951] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.146 [2024-05-15 04:54:46.205026] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.146 [2024-05-15 04:54:46.205037] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.146 [2024-05-15 04:54:46.205055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.146 [2024-05-15 04:54:46.205062] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.146 [2024-05-15 04:54:46.205106] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.146 [2024-05-15 04:54:46.205114] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:32.146 [2024-05-15 04:54:46.205135] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.146 04:54:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.147 04:54:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.147 04:54:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.147 04:54:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.147 04:54:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.147 04:54:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.147 "name": "Existed_Raid", 00:17:32.147 "uuid": "c1b66f54-e565-40a0-863e-5a03d3ffba5e", 00:17:32.147 "strip_size_kb": 0, 00:17:32.147 "state": "configuring", 00:17:32.147 "raid_level": "raid1", 00:17:32.147 "superblock": true, 00:17:32.147 "num_base_bdevs": 4, 00:17:32.147 "num_base_bdevs_discovered": 0, 00:17:32.147 "num_base_bdevs_operational": 4, 00:17:32.147 "base_bdevs_list": [ 00:17:32.147 { 00:17:32.147 "name": "BaseBdev1", 00:17:32.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.147 "is_configured": false, 00:17:32.147 "data_offset": 0, 00:17:32.147 "data_size": 0 00:17:32.147 }, 00:17:32.147 { 00:17:32.147 "name": "BaseBdev2", 00:17:32.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.147 "is_configured": false, 00:17:32.147 "data_offset": 0, 00:17:32.147 "data_size": 0 00:17:32.147 }, 00:17:32.147 { 00:17:32.147 "name": "BaseBdev3", 00:17:32.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.147 "is_configured": false, 00:17:32.147 "data_offset": 0, 00:17:32.147 "data_size": 0 00:17:32.147 }, 00:17:32.147 { 00:17:32.147 "name": "BaseBdev4", 00:17:32.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.147 "is_configured": false, 00:17:32.147 "data_offset": 0, 00:17:32.147 "data_size": 0 00:17:32.147 } 00:17:32.147 ] 00:17:32.147 }' 00:17:32.147 04:54:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.147 04:54:46 -- common/autotest_common.sh@10 -- # set +x 00:17:32.713 04:54:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:32.971 [2024-05-15 04:54:46.984939] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.971 [2024-05-15 04:54:46.984981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:17:32.971 04:54:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:32.971 [2024-05-15 04:54:47.201022] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.971 [2024-05-15 04:54:47.201068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.971 [2024-05-15 04:54:47.201077] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.971 [2024-05-15 04:54:47.201110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't 
exist now 00:17:32.971 [2024-05-15 04:54:47.201117] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.971 [2024-05-15 04:54:47.201140] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.971 [2024-05-15 04:54:47.201147] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:32.971 [2024-05-15 04:54:47.201168] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:33.228 04:54:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.228 [2024-05-15 04:54:47.399533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.228 BaseBdev1 00:17:33.228 04:54:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:33.228 04:54:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:33.228 04:54:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.228 04:54:47 -- common/autotest_common.sh@889 -- # local i 00:17:33.228 04:54:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.228 04:54:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.228 04:54:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.485 04:54:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:33.485 [ 00:17:33.485 { 00:17:33.485 "name": "BaseBdev1", 00:17:33.485 "aliases": [ 00:17:33.485 "98ea9b14-9dff-4eb0-a94f-91f4a9d5f7a0" 00:17:33.485 ], 00:17:33.485 "product_name": "Malloc disk", 00:17:33.485 "block_size": 512, 00:17:33.485 "num_blocks": 65536, 00:17:33.485 "uuid": "98ea9b14-9dff-4eb0-a94f-91f4a9d5f7a0", 00:17:33.485 "assigned_rate_limits": { 00:17:33.485 "rw_ios_per_sec": 0, 00:17:33.485 "rw_mbytes_per_sec": 0, 00:17:33.485 "r_mbytes_per_sec": 0, 00:17:33.485 "w_mbytes_per_sec": 0 00:17:33.485 }, 00:17:33.485 "claimed": true, 00:17:33.485 "claim_type": "exclusive_write", 00:17:33.485 "zoned": false, 00:17:33.485 "supported_io_types": { 00:17:33.485 "read": true, 00:17:33.485 "write": true, 00:17:33.485 "unmap": true, 00:17:33.485 "write_zeroes": true, 00:17:33.485 "flush": true, 00:17:33.485 "reset": true, 00:17:33.485 "compare": false, 00:17:33.485 "compare_and_write": false, 00:17:33.485 "abort": true, 00:17:33.485 "nvme_admin": false, 00:17:33.485 "nvme_io": false 00:17:33.485 }, 00:17:33.485 "memory_domains": [ 00:17:33.485 { 00:17:33.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.485 "dma_device_type": 2 00:17:33.485 } 00:17:33.485 ], 00:17:33.485 "driver_specific": {} 00:17:33.485 } 00:17:33.485 ] 00:17:33.485 04:54:47 -- common/autotest_common.sh@895 -- # return 0 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.485 04:54:47 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.485 04:54:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.742 04:54:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.742 "name": "Existed_Raid", 00:17:33.742 "uuid": "4e88067e-ed02-4172-99a4-f3286b1083aa", 00:17:33.742 "strip_size_kb": 0, 00:17:33.742 "state": "configuring", 00:17:33.742 "raid_level": "raid1", 00:17:33.742 "superblock": true, 00:17:33.742 "num_base_bdevs": 4, 00:17:33.742 "num_base_bdevs_discovered": 1, 00:17:33.742 "num_base_bdevs_operational": 4, 00:17:33.742 "base_bdevs_list": [ 00:17:33.742 { 00:17:33.742 "name": "BaseBdev1", 00:17:33.742 "uuid": "98ea9b14-9dff-4eb0-a94f-91f4a9d5f7a0", 00:17:33.742 "is_configured": true, 00:17:33.742 "data_offset": 2048, 00:17:33.742 "data_size": 63488 00:17:33.742 }, 00:17:33.742 { 00:17:33.742 "name": "BaseBdev2", 00:17:33.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.742 "is_configured": false, 00:17:33.742 "data_offset": 0, 00:17:33.742 "data_size": 0 00:17:33.742 }, 00:17:33.742 { 00:17:33.742 "name": "BaseBdev3", 00:17:33.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.742 "is_configured": false, 00:17:33.742 "data_offset": 0, 00:17:33.742 "data_size": 0 00:17:33.742 }, 00:17:33.742 { 00:17:33.742 "name": "BaseBdev4", 00:17:33.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.742 "is_configured": false, 00:17:33.742 "data_offset": 0, 00:17:33.742 "data_size": 0 00:17:33.742 } 00:17:33.742 ] 00:17:33.742 }' 00:17:33.742 04:54:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.742 04:54:47 -- common/autotest_common.sh@10 -- # set +x 00:17:34.307 04:54:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:34.565 [2024-05-15 04:54:48.583644] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.565 [2024-05-15 04:54:48.583688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:17:34.565 04:54:48 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:34.565 04:54:48 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:34.823 04:54:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.081 BaseBdev1 00:17:35.081 04:54:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:35.081 04:54:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:35.081 04:54:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:35.081 04:54:49 -- common/autotest_common.sh@889 -- # local i 00:17:35.081 04:54:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:35.081 04:54:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:35.081 04:54:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.081 04:54:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.339 [ 00:17:35.339 { 00:17:35.339 "name": "BaseBdev1", 00:17:35.339 "aliases": [ 00:17:35.339 "de29bcde-7412-4688-bf07-4d00ee800cc0" 00:17:35.339 ], 00:17:35.339 "product_name": "Malloc disk", 00:17:35.339 "block_size": 512, 00:17:35.339 "num_blocks": 65536, 00:17:35.339 "uuid": "de29bcde-7412-4688-bf07-4d00ee800cc0", 00:17:35.339 "assigned_rate_limits": { 00:17:35.339 "rw_ios_per_sec": 0, 00:17:35.339 "rw_mbytes_per_sec": 0, 00:17:35.339 "r_mbytes_per_sec": 0, 00:17:35.339 "w_mbytes_per_sec": 0 00:17:35.339 }, 00:17:35.339 "claimed": false, 00:17:35.339 "zoned": false, 00:17:35.339 "supported_io_types": { 00:17:35.339 "read": true, 00:17:35.339 "write": true, 00:17:35.339 "unmap": true, 00:17:35.339 "write_zeroes": true, 00:17:35.339 "flush": true, 00:17:35.339 "reset": true, 00:17:35.339 "compare": false, 00:17:35.339 "compare_and_write": false, 00:17:35.339 "abort": true, 00:17:35.339 "nvme_admin": false, 00:17:35.339 "nvme_io": false 00:17:35.339 }, 00:17:35.339 "memory_domains": [ 00:17:35.339 { 00:17:35.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.339 "dma_device_type": 2 00:17:35.339 } 00:17:35.339 ], 00:17:35.339 "driver_specific": {} 00:17:35.339 } 00:17:35.339 ] 00:17:35.339 04:54:49 -- common/autotest_common.sh@895 -- # return 0 00:17:35.339 04:54:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:35.597 [2024-05-15 04:54:49.581418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.597 [2024-05-15 04:54:49.583041] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.597 [2024-05-15 04:54:49.583141] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.597 [2024-05-15 04:54:49.583159] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.597 [2024-05-15 04:54:49.583191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.597 [2024-05-15 04:54:49.583204] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:35.598 [2024-05-15 04:54:49.583227] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.598 "name": "Existed_Raid", 00:17:35.598 "uuid": "91fdb9e5-dbde-49f0-b7ab-2bd0720d52a9", 00:17:35.598 "strip_size_kb": 0, 00:17:35.598 "state": "configuring", 00:17:35.598 "raid_level": "raid1", 00:17:35.598 "superblock": true, 00:17:35.598 "num_base_bdevs": 4, 00:17:35.598 "num_base_bdevs_discovered": 1, 00:17:35.598 "num_base_bdevs_operational": 4, 00:17:35.598 "base_bdevs_list": [ 00:17:35.598 { 00:17:35.598 "name": "BaseBdev1", 00:17:35.598 "uuid": "de29bcde-7412-4688-bf07-4d00ee800cc0", 00:17:35.598 "is_configured": true, 00:17:35.598 "data_offset": 2048, 00:17:35.598 "data_size": 63488 00:17:35.598 }, 00:17:35.598 { 00:17:35.598 "name": "BaseBdev2", 00:17:35.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.598 "is_configured": false, 00:17:35.598 "data_offset": 0, 00:17:35.598 "data_size": 0 00:17:35.598 }, 00:17:35.598 { 00:17:35.598 "name": "BaseBdev3", 00:17:35.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.598 "is_configured": false, 00:17:35.598 "data_offset": 0, 00:17:35.598 "data_size": 0 00:17:35.598 }, 00:17:35.598 { 00:17:35.598 "name": "BaseBdev4", 00:17:35.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.598 "is_configured": false, 00:17:35.598 "data_offset": 0, 00:17:35.598 "data_size": 0 00:17:35.598 } 00:17:35.598 ] 00:17:35.598 }' 00:17:35.598 04:54:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.598 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:17:36.164 04:54:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.422 [2024-05-15 04:54:50.554770] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.422 BaseBdev2 00:17:36.422 04:54:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:36.422 04:54:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:36.422 04:54:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:36.422 04:54:50 -- common/autotest_common.sh@889 -- # local i 00:17:36.422 04:54:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:36.422 04:54:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:36.422 04:54:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.681 04:54:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.681 [ 00:17:36.681 { 00:17:36.681 "name": "BaseBdev2", 00:17:36.681 "aliases": [ 00:17:36.681 "8cdc22f4-633f-41a7-9e35-5edd8c604a80" 00:17:36.681 ], 00:17:36.681 "product_name": "Malloc disk", 00:17:36.681 "block_size": 512, 00:17:36.681 "num_blocks": 65536, 00:17:36.681 "uuid": "8cdc22f4-633f-41a7-9e35-5edd8c604a80", 00:17:36.681 "assigned_rate_limits": { 00:17:36.681 "rw_ios_per_sec": 0, 00:17:36.681 "rw_mbytes_per_sec": 0, 00:17:36.681 "r_mbytes_per_sec": 0, 00:17:36.681 "w_mbytes_per_sec": 0 00:17:36.681 }, 00:17:36.681 "claimed": true, 00:17:36.681 "claim_type": "exclusive_write", 00:17:36.681 "zoned": false, 00:17:36.681 "supported_io_types": { 00:17:36.681 "read": true, 00:17:36.681 "write": true, 00:17:36.681 "unmap": true, 00:17:36.681 "write_zeroes": true, 00:17:36.681 "flush": true, 00:17:36.681 "reset": true, 
00:17:36.681 "compare": false, 00:17:36.681 "compare_and_write": false, 00:17:36.681 "abort": true, 00:17:36.681 "nvme_admin": false, 00:17:36.681 "nvme_io": false 00:17:36.681 }, 00:17:36.681 "memory_domains": [ 00:17:36.681 { 00:17:36.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.681 "dma_device_type": 2 00:17:36.681 } 00:17:36.681 ], 00:17:36.681 "driver_specific": {} 00:17:36.681 } 00:17:36.681 ] 00:17:36.681 04:54:50 -- common/autotest_common.sh@895 -- # return 0 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.681 04:54:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.938 04:54:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.938 "name": "Existed_Raid", 00:17:36.938 "uuid": "91fdb9e5-dbde-49f0-b7ab-2bd0720d52a9", 00:17:36.938 "strip_size_kb": 0, 00:17:36.938 "state": "configuring", 00:17:36.938 "raid_level": "raid1", 00:17:36.938 "superblock": true, 00:17:36.938 "num_base_bdevs": 4, 00:17:36.938 "num_base_bdevs_discovered": 2, 00:17:36.938 "num_base_bdevs_operational": 4, 00:17:36.938 "base_bdevs_list": [ 00:17:36.938 { 00:17:36.938 "name": "BaseBdev1", 00:17:36.938 "uuid": "de29bcde-7412-4688-bf07-4d00ee800cc0", 00:17:36.938 "is_configured": true, 00:17:36.938 "data_offset": 2048, 00:17:36.938 "data_size": 63488 00:17:36.938 }, 00:17:36.938 { 00:17:36.938 "name": "BaseBdev2", 00:17:36.938 "uuid": "8cdc22f4-633f-41a7-9e35-5edd8c604a80", 00:17:36.938 "is_configured": true, 00:17:36.938 "data_offset": 2048, 00:17:36.938 "data_size": 63488 00:17:36.938 }, 00:17:36.938 { 00:17:36.938 "name": "BaseBdev3", 00:17:36.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.939 "is_configured": false, 00:17:36.939 "data_offset": 0, 00:17:36.939 "data_size": 0 00:17:36.939 }, 00:17:36.939 { 00:17:36.939 "name": "BaseBdev4", 00:17:36.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.939 "is_configured": false, 00:17:36.939 "data_offset": 0, 00:17:36.939 "data_size": 0 00:17:36.939 } 00:17:36.939 ] 00:17:36.939 }' 00:17:36.939 04:54:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.939 04:54:51 -- common/autotest_common.sh@10 -- # set +x 00:17:37.505 04:54:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:37.505 [2024-05-15 04:54:51.645310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.505 BaseBdev3 00:17:37.505 04:54:51 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:37.505 04:54:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:37.505 04:54:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:37.505 04:54:51 -- common/autotest_common.sh@889 -- # local i 00:17:37.505 04:54:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:37.505 04:54:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:37.505 04:54:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:37.763 04:54:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:38.022 [ 00:17:38.022 { 00:17:38.022 "name": "BaseBdev3", 00:17:38.022 "aliases": [ 00:17:38.022 "0bf27991-e608-4e73-a657-1ccd370c6b68" 00:17:38.022 ], 00:17:38.022 "product_name": "Malloc disk", 00:17:38.022 "block_size": 512, 00:17:38.022 "num_blocks": 65536, 00:17:38.022 "uuid": "0bf27991-e608-4e73-a657-1ccd370c6b68", 00:17:38.022 "assigned_rate_limits": { 00:17:38.022 "rw_ios_per_sec": 0, 00:17:38.022 "rw_mbytes_per_sec": 0, 00:17:38.022 "r_mbytes_per_sec": 0, 00:17:38.022 "w_mbytes_per_sec": 0 00:17:38.022 }, 00:17:38.022 "claimed": true, 00:17:38.022 "claim_type": "exclusive_write", 00:17:38.022 "zoned": false, 00:17:38.022 "supported_io_types": { 00:17:38.022 "read": true, 00:17:38.022 "write": true, 00:17:38.022 "unmap": true, 00:17:38.022 "write_zeroes": true, 00:17:38.022 "flush": true, 00:17:38.022 "reset": true, 00:17:38.022 "compare": false, 00:17:38.022 "compare_and_write": false, 00:17:38.022 "abort": true, 00:17:38.022 "nvme_admin": false, 00:17:38.022 "nvme_io": false 00:17:38.022 }, 00:17:38.022 "memory_domains": [ 00:17:38.022 { 00:17:38.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.022 "dma_device_type": 2 00:17:38.022 } 00:17:38.022 ], 00:17:38.022 "driver_specific": {} 00:17:38.022 } 00:17:38.022 ] 00:17:38.022 04:54:52 -- common/autotest_common.sh@895 -- # return 0 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.022 04:54:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.022 "name": "Existed_Raid", 00:17:38.022 "uuid": "91fdb9e5-dbde-49f0-b7ab-2bd0720d52a9", 00:17:38.022 "strip_size_kb": 0, 00:17:38.022 "state": "configuring", 00:17:38.022 "raid_level": "raid1", 00:17:38.022 
"superblock": true, 00:17:38.022 "num_base_bdevs": 4, 00:17:38.022 "num_base_bdevs_discovered": 3, 00:17:38.022 "num_base_bdevs_operational": 4, 00:17:38.022 "base_bdevs_list": [ 00:17:38.022 { 00:17:38.022 "name": "BaseBdev1", 00:17:38.022 "uuid": "de29bcde-7412-4688-bf07-4d00ee800cc0", 00:17:38.022 "is_configured": true, 00:17:38.022 "data_offset": 2048, 00:17:38.022 "data_size": 63488 00:17:38.022 }, 00:17:38.023 { 00:17:38.023 "name": "BaseBdev2", 00:17:38.023 "uuid": "8cdc22f4-633f-41a7-9e35-5edd8c604a80", 00:17:38.023 "is_configured": true, 00:17:38.023 "data_offset": 2048, 00:17:38.023 "data_size": 63488 00:17:38.023 }, 00:17:38.023 { 00:17:38.023 "name": "BaseBdev3", 00:17:38.023 "uuid": "0bf27991-e608-4e73-a657-1ccd370c6b68", 00:17:38.023 "is_configured": true, 00:17:38.023 "data_offset": 2048, 00:17:38.023 "data_size": 63488 00:17:38.023 }, 00:17:38.023 { 00:17:38.023 "name": "BaseBdev4", 00:17:38.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.023 "is_configured": false, 00:17:38.023 "data_offset": 0, 00:17:38.023 "data_size": 0 00:17:38.023 } 00:17:38.023 ] 00:17:38.023 }' 00:17:38.023 04:54:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.023 04:54:52 -- common/autotest_common.sh@10 -- # set +x 00:17:38.589 04:54:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:38.848 [2024-05-15 04:54:52.894171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:38.848 [2024-05-15 04:54:52.894328] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029180 00:17:38.848 [2024-05-15 04:54:52.894339] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:38.848 [2024-05-15 04:54:52.894441] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:38.848 [2024-05-15 04:54:52.894663] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029180 00:17:38.848 [2024-05-15 04:54:52.894672] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000029180 00:17:38.848 BaseBdev4 00:17:38.848 [2024-05-15 04:54:52.895092] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.848 04:54:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:38.848 04:54:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:38.848 04:54:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:38.848 04:54:52 -- common/autotest_common.sh@889 -- # local i 00:17:38.848 04:54:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:38.848 04:54:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:38.848 04:54:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.106 04:54:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:39.363 [ 00:17:39.363 { 00:17:39.363 "name": "BaseBdev4", 00:17:39.363 "aliases": [ 00:17:39.363 "04fc0359-99df-4cf9-a9cb-1e71ae3652ba" 00:17:39.363 ], 00:17:39.363 "product_name": "Malloc disk", 00:17:39.363 "block_size": 512, 00:17:39.363 "num_blocks": 65536, 00:17:39.363 "uuid": "04fc0359-99df-4cf9-a9cb-1e71ae3652ba", 00:17:39.363 "assigned_rate_limits": { 00:17:39.363 "rw_ios_per_sec": 0, 00:17:39.364 "rw_mbytes_per_sec": 0, 
00:17:39.364 "r_mbytes_per_sec": 0, 00:17:39.364 "w_mbytes_per_sec": 0 00:17:39.364 }, 00:17:39.364 "claimed": true, 00:17:39.364 "claim_type": "exclusive_write", 00:17:39.364 "zoned": false, 00:17:39.364 "supported_io_types": { 00:17:39.364 "read": true, 00:17:39.364 "write": true, 00:17:39.364 "unmap": true, 00:17:39.364 "write_zeroes": true, 00:17:39.364 "flush": true, 00:17:39.364 "reset": true, 00:17:39.364 "compare": false, 00:17:39.364 "compare_and_write": false, 00:17:39.364 "abort": true, 00:17:39.364 "nvme_admin": false, 00:17:39.364 "nvme_io": false 00:17:39.364 }, 00:17:39.364 "memory_domains": [ 00:17:39.364 { 00:17:39.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.364 "dma_device_type": 2 00:17:39.364 } 00:17:39.364 ], 00:17:39.364 "driver_specific": {} 00:17:39.364 } 00:17:39.364 ] 00:17:39.364 04:54:53 -- common/autotest_common.sh@895 -- # return 0 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.364 "name": "Existed_Raid", 00:17:39.364 "uuid": "91fdb9e5-dbde-49f0-b7ab-2bd0720d52a9", 00:17:39.364 "strip_size_kb": 0, 00:17:39.364 "state": "online", 00:17:39.364 "raid_level": "raid1", 00:17:39.364 "superblock": true, 00:17:39.364 "num_base_bdevs": 4, 00:17:39.364 "num_base_bdevs_discovered": 4, 00:17:39.364 "num_base_bdevs_operational": 4, 00:17:39.364 "base_bdevs_list": [ 00:17:39.364 { 00:17:39.364 "name": "BaseBdev1", 00:17:39.364 "uuid": "de29bcde-7412-4688-bf07-4d00ee800cc0", 00:17:39.364 "is_configured": true, 00:17:39.364 "data_offset": 2048, 00:17:39.364 "data_size": 63488 00:17:39.364 }, 00:17:39.364 { 00:17:39.364 "name": "BaseBdev2", 00:17:39.364 "uuid": "8cdc22f4-633f-41a7-9e35-5edd8c604a80", 00:17:39.364 "is_configured": true, 00:17:39.364 "data_offset": 2048, 00:17:39.364 "data_size": 63488 00:17:39.364 }, 00:17:39.364 { 00:17:39.364 "name": "BaseBdev3", 00:17:39.364 "uuid": "0bf27991-e608-4e73-a657-1ccd370c6b68", 00:17:39.364 "is_configured": true, 00:17:39.364 "data_offset": 2048, 00:17:39.364 "data_size": 63488 00:17:39.364 }, 00:17:39.364 { 00:17:39.364 "name": "BaseBdev4", 00:17:39.364 "uuid": "04fc0359-99df-4cf9-a9cb-1e71ae3652ba", 00:17:39.364 "is_configured": true, 00:17:39.364 "data_offset": 2048, 00:17:39.364 "data_size": 63488 00:17:39.364 } 00:17:39.364 ] 00:17:39.364 }' 00:17:39.364 04:54:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.364 
04:54:53 -- common/autotest_common.sh@10 -- # set +x 00:17:39.929 04:54:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:40.188 [2024-05-15 04:54:54.322394] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.446 04:54:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.446 "name": "Existed_Raid", 00:17:40.446 "uuid": "91fdb9e5-dbde-49f0-b7ab-2bd0720d52a9", 00:17:40.446 "strip_size_kb": 0, 00:17:40.446 "state": "online", 00:17:40.446 "raid_level": "raid1", 00:17:40.446 "superblock": true, 00:17:40.446 "num_base_bdevs": 4, 00:17:40.446 "num_base_bdevs_discovered": 3, 00:17:40.446 "num_base_bdevs_operational": 3, 00:17:40.446 "base_bdevs_list": [ 00:17:40.446 { 00:17:40.446 "name": null, 00:17:40.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.446 "is_configured": false, 00:17:40.446 "data_offset": 2048, 00:17:40.446 "data_size": 63488 00:17:40.446 }, 00:17:40.446 { 00:17:40.446 "name": "BaseBdev2", 00:17:40.446 "uuid": "8cdc22f4-633f-41a7-9e35-5edd8c604a80", 00:17:40.446 "is_configured": true, 00:17:40.446 "data_offset": 2048, 00:17:40.446 "data_size": 63488 00:17:40.446 }, 00:17:40.446 { 00:17:40.446 "name": "BaseBdev3", 00:17:40.446 "uuid": "0bf27991-e608-4e73-a657-1ccd370c6b68", 00:17:40.446 "is_configured": true, 00:17:40.446 "data_offset": 2048, 00:17:40.446 "data_size": 63488 00:17:40.446 }, 00:17:40.447 { 00:17:40.447 "name": "BaseBdev4", 00:17:40.447 "uuid": "04fc0359-99df-4cf9-a9cb-1e71ae3652ba", 00:17:40.447 "is_configured": true, 00:17:40.447 "data_offset": 2048, 00:17:40.447 "data_size": 63488 00:17:40.447 } 00:17:40.447 ] 00:17:40.447 }' 00:17:40.447 04:54:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.447 04:54:54 -- common/autotest_common.sh@10 -- # set +x 00:17:41.013 04:54:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:41.013 04:54:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:41.013 04:54:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.013 04:54:55 -- 
bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:41.272 04:54:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:41.272 04:54:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.272 04:54:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:41.272 [2024-05-15 04:54:55.494969] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.530 04:54:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:41.530 04:54:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:41.530 04:54:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:41.530 04:54:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.789 04:54:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:41.789 04:54:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.789 04:54:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:41.789 [2024-05-15 04:54:55.965254] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.047 04:54:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:42.305 [2024-05-15 04:54:56.428949] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:42.305 [2024-05-15 04:54:56.428979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.305 [2024-05-15 04:54:56.429017] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.305 [2024-05-15 04:54:56.527861] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.305 [2024-05-15 04:54:56.527893] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029180 name Existed_Raid, state offline 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:42.564 04:54:56 -- bdev/bdev_raid.sh@287 -- # killprocess 55667 00:17:42.564 04:54:56 -- common/autotest_common.sh@926 -- # '[' -z 55667 ']' 00:17:42.564 04:54:56 -- common/autotest_common.sh@930 -- # kill -0 55667 00:17:42.564 04:54:56 -- common/autotest_common.sh@931 -- # uname 00:17:42.564 04:54:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.564 04:54:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 55667 00:17:42.564 04:54:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:42.564 04:54:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:42.564 killing process with pid 55667 00:17:42.564 04:54:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55667' 00:17:42.564 04:54:56 -- common/autotest_common.sh@945 -- # kill 55667 00:17:42.564 04:54:56 -- common/autotest_common.sh@950 -- # wait 55667 00:17:42.564 [2024-05-15 04:54:56.794088] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.564 [2024-05-15 04:54:56.794208] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:44.466 ************************************ 00:17:44.466 END TEST raid_state_function_test_sb 00:17:44.466 ************************************ 00:17:44.466 00:17:44.466 real 0m13.744s 00:17:44.466 user 0m23.102s 00:17:44.466 sys 0m1.770s 00:17:44.466 04:54:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.466 04:54:58 -- common/autotest_common.sh@10 -- # set +x 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:44.466 04:54:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:44.466 04:54:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:44.466 04:54:58 -- common/autotest_common.sh@10 -- # set +x 00:17:44.466 ************************************ 00:17:44.466 START TEST raid_superblock_test 00:17:44.466 ************************************ 00:17:44.466 04:54:58 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:44.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@357 -- # raid_pid=56106 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@358 -- # waitforlisten 56106 /var/tmp/spdk-raid.sock 00:17:44.466 04:54:58 -- common/autotest_common.sh@819 -- # '[' -z 56106 ']' 00:17:44.466 04:54:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:44.466 04:54:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.466 04:54:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
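(The raid_superblock_test starting here layers a passthru bdev over each malloc bdev and assembles the four passthru devices into raid_bdev1 with on-disk superblocks. The following is a condensed sketch of the RPC sequence traced below, not captured output; sizes, names, and UUIDs are the ones the log itself uses.)

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b malloc$i        # 32 MiB backing disk with 512-byte blocks (65536 blocks)
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  $RPC bdev_raid_create -s -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1   # -s writes a superblock to each base bdev
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'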
00:17:44.466 04:54:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.466 04:54:58 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:44.466 04:54:58 -- common/autotest_common.sh@10 -- # set +x 00:17:44.466 [2024-05-15 04:54:58.446955] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:44.466 [2024-05-15 04:54:58.447194] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56106 ] 00:17:44.466 [2024-05-15 04:54:58.611684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.724 [2024-05-15 04:54:58.834134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.983 [2024-05-15 04:54:59.101276] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.919 04:54:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:45.919 04:54:59 -- common/autotest_common.sh@852 -- # return 0 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:45.919 04:54:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:46.177 malloc1 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.177 [2024-05-15 04:55:00.323332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.177 [2024-05-15 04:55:00.323413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.177 [2024-05-15 04:55:00.323480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:17:46.177 [2024-05-15 04:55:00.323521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.177 [2024-05-15 04:55:00.325100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.177 [2024-05-15 04:55:00.325140] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.177 pt1 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.177 04:55:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:46.435 malloc2 00:17:46.435 04:55:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.435 [2024-05-15 04:55:00.641539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.435 [2024-05-15 04:55:00.641610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.435 [2024-05-15 04:55:00.641672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:17:46.435 [2024-05-15 04:55:00.641713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.435 [2024-05-15 04:55:00.643353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.436 [2024-05-15 04:55:00.643387] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.436 pt2 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.436 04:55:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:46.694 malloc3 00:17:46.953 04:55:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:46.953 [2024-05-15 04:55:01.057251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:46.953 [2024-05-15 04:55:01.057324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.953 [2024-05-15 04:55:01.057388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:17:46.953 [2024-05-15 04:55:01.057428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.953 [2024-05-15 04:55:01.059240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.953 [2024-05-15 04:55:01.059301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:46.953 pt3 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.953 04:55:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:47.214 malloc4 00:17:47.214 04:55:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:47.214 [2024-05-15 04:55:01.437007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:47.214 [2024-05-15 04:55:01.437077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.214 [2024-05-15 04:55:01.437112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:17:47.214 [2024-05-15 04:55:01.437156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.214 pt4 00:17:47.214 [2024-05-15 04:55:01.438605] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.214 [2024-05-15 04:55:01.438644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:47.473 [2024-05-15 04:55:01.573100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.473 [2024-05-15 04:55:01.574437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.473 [2024-05-15 04:55:01.574480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.473 [2024-05-15 04:55:01.574505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:47.473 [2024-05-15 04:55:01.574614] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:17:47.473 [2024-05-15 04:55:01.574624] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:47.473 [2024-05-15 04:55:01.574760] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:47.473 [2024-05-15 04:55:01.575002] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:17:47.473 [2024-05-15 04:55:01.575013] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:17:47.473 [2024-05-15 04:55:01.575147] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.473 04:55:01 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.473 04:55:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.731 04:55:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.731 "name": "raid_bdev1", 00:17:47.731 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:47.731 "strip_size_kb": 0, 00:17:47.731 "state": "online", 00:17:47.731 "raid_level": "raid1", 00:17:47.731 "superblock": true, 00:17:47.731 "num_base_bdevs": 4, 00:17:47.731 "num_base_bdevs_discovered": 4, 00:17:47.731 "num_base_bdevs_operational": 4, 00:17:47.731 "base_bdevs_list": [ 00:17:47.731 { 00:17:47.731 "name": "pt1", 00:17:47.731 "uuid": "efad7951-eb00-5d26-a477-f4732fed4f9d", 00:17:47.731 "is_configured": true, 00:17:47.731 "data_offset": 2048, 00:17:47.731 "data_size": 63488 00:17:47.731 }, 00:17:47.731 { 00:17:47.731 "name": "pt2", 00:17:47.731 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:47.731 "is_configured": true, 00:17:47.731 "data_offset": 2048, 00:17:47.731 "data_size": 63488 00:17:47.731 }, 00:17:47.731 { 00:17:47.731 "name": "pt3", 00:17:47.731 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:47.731 "is_configured": true, 00:17:47.731 "data_offset": 2048, 00:17:47.731 "data_size": 63488 00:17:47.731 }, 00:17:47.731 { 00:17:47.731 "name": "pt4", 00:17:47.731 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:47.731 "is_configured": true, 00:17:47.731 "data_offset": 2048, 00:17:47.731 "data_size": 63488 00:17:47.731 } 00:17:47.731 ] 00:17:47.731 }' 00:17:47.731 04:55:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.731 04:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:48.304 04:55:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:48.304 04:55:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:48.305 [2024-05-15 04:55:02.497247] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.305 04:55:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f5a6e51a-5bb1-4059-bb80-2c134539e419 00:17:48.305 04:55:02 -- bdev/bdev_raid.sh@380 -- # '[' -z f5a6e51a-5bb1-4059-bb80-2c134539e419 ']' 00:17:48.305 04:55:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:48.565 [2024-05-15 04:55:02.645146] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.565 [2024-05-15 04:55:02.645171] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.565 [2024-05-15 04:55:02.645237] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.565 [2024-05-15 04:55:02.645292] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.565 [2024-05-15 04:55:02.645302] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:17:48.565 04:55:02 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:48.565 04:55:02 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.823 04:55:02 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:48.823 04:55:02 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:48.823 04:55:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.823 04:55:02 -- 
bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:48.823 04:55:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.823 04:55:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:49.082 04:55:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.082 04:55:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:49.399 04:55:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.399 04:55:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:49.399 04:55:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:49.399 04:55:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:49.661 04:55:03 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:49.661 04:55:03 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:49.661 04:55:03 -- common/autotest_common.sh@640 -- # local es=0 00:17:49.661 04:55:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:49.661 04:55:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.661 04:55:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.661 04:55:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.661 04:55:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.661 04:55:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.661 04:55:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.661 04:55:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.661 04:55:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:49.661 04:55:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:49.919 [2024-05-15 04:55:03.901228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:49.919 [2024-05-15 04:55:03.903004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:49.919 [2024-05-15 04:55:03.903047] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:49.919 [2024-05-15 04:55:03.903068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:49.919 [2024-05-15 04:55:03.903100] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:49.919 [2024-05-15 04:55:03.903166] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:49.919 [2024-05-15 04:55:03.903194] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:49.919 [2024-05-15 04:55:03.903238] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:49.919 [2024-05-15 04:55:03.903264] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.919 [2024-05-15 04:55:03.903275] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002e580 name raid_bdev1, state configuring 00:17:49.919 request: 00:17:49.919 { 00:17:49.919 "name": "raid_bdev1", 00:17:49.919 "raid_level": "raid1", 00:17:49.919 "base_bdevs": [ 00:17:49.919 "malloc1", 00:17:49.919 "malloc2", 00:17:49.919 "malloc3", 00:17:49.919 "malloc4" 00:17:49.919 ], 00:17:49.919 "superblock": false, 00:17:49.919 "method": "bdev_raid_create", 00:17:49.919 "req_id": 1 00:17:49.919 } 00:17:49.919 Got JSON-RPC error response 00:17:49.919 response: 00:17:49.919 { 00:17:49.919 "code": -17, 00:17:49.919 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:49.919 } 00:17:49.919 04:55:03 -- common/autotest_common.sh@643 -- # es=1 00:17:49.919 04:55:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:49.919 04:55:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:49.919 04:55:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:49.919 04:55:03 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:49.919 04:55:03 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.919 04:55:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:49.919 04:55:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:49.919 04:55:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.177 [2024-05-15 04:55:04.257243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.177 [2024-05-15 04:55:04.257314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.177 [2024-05-15 04:55:04.257403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:17:50.177 [2024-05-15 04:55:04.257434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.177 [2024-05-15 04:55:04.259124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.177 [2024-05-15 04:55:04.259176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.177 [2024-05-15 04:55:04.259267] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:50.177 [2024-05-15 04:55:04.259325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.177 pt1 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.177 04:55:04 -- 
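The JSON-RPC error logged above is the expected outcome: the malloc bdevs still carry the raid superblock written through the passthru layer, so creating a second raid directly over them must be rejected. A hedged sketch of the same negative check (error code and message as logged):

  if "$rpc" -s "$sock" bdev_raid_create -r raid1 \
       -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
  fi
  # expected: code -17, "Failed to create RAID bdev raid_bdev1: File exists"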
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.177 04:55:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.436 04:55:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.436 "name": "raid_bdev1", 00:17:50.436 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:50.436 "strip_size_kb": 0, 00:17:50.436 "state": "configuring", 00:17:50.436 "raid_level": "raid1", 00:17:50.436 "superblock": true, 00:17:50.436 "num_base_bdevs": 4, 00:17:50.436 "num_base_bdevs_discovered": 1, 00:17:50.436 "num_base_bdevs_operational": 4, 00:17:50.436 "base_bdevs_list": [ 00:17:50.436 { 00:17:50.436 "name": "pt1", 00:17:50.436 "uuid": "efad7951-eb00-5d26-a477-f4732fed4f9d", 00:17:50.436 "is_configured": true, 00:17:50.436 "data_offset": 2048, 00:17:50.436 "data_size": 63488 00:17:50.436 }, 00:17:50.436 { 00:17:50.436 "name": null, 00:17:50.436 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:50.436 "is_configured": false, 00:17:50.436 "data_offset": 2048, 00:17:50.436 "data_size": 63488 00:17:50.436 }, 00:17:50.436 { 00:17:50.436 "name": null, 00:17:50.436 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:50.436 "is_configured": false, 00:17:50.436 "data_offset": 2048, 00:17:50.436 "data_size": 63488 00:17:50.436 }, 00:17:50.436 { 00:17:50.436 "name": null, 00:17:50.436 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:50.436 "is_configured": false, 00:17:50.436 "data_offset": 2048, 00:17:50.436 "data_size": 63488 00:17:50.436 } 00:17:50.436 ] 00:17:50.436 }' 00:17:50.436 04:55:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.436 04:55:04 -- common/autotest_common.sh@10 -- # set +x 00:17:51.002 04:55:04 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:51.002 04:55:04 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.002 [2024-05-15 04:55:05.185337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.002 [2024-05-15 04:55:05.185399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.002 [2024-05-15 04:55:05.185453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031880 00:17:51.002 [2024-05-15 04:55:05.185475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.002 [2024-05-15 04:55:05.185998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.002 [2024-05-15 04:55:05.186042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.002 [2024-05-15 04:55:05.186131] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:51.002 [2024-05-15 04:55:05.186160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.002 pt2 00:17:51.002 04:55:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:51.261 [2024-05-15 04:55:05.337357] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 4 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.261 04:55:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.519 04:55:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.519 "name": "raid_bdev1", 00:17:51.519 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:51.519 "strip_size_kb": 0, 00:17:51.519 "state": "configuring", 00:17:51.519 "raid_level": "raid1", 00:17:51.519 "superblock": true, 00:17:51.519 "num_base_bdevs": 4, 00:17:51.519 "num_base_bdevs_discovered": 1, 00:17:51.519 "num_base_bdevs_operational": 4, 00:17:51.519 "base_bdevs_list": [ 00:17:51.519 { 00:17:51.519 "name": "pt1", 00:17:51.519 "uuid": "efad7951-eb00-5d26-a477-f4732fed4f9d", 00:17:51.519 "is_configured": true, 00:17:51.519 "data_offset": 2048, 00:17:51.519 "data_size": 63488 00:17:51.519 }, 00:17:51.519 { 00:17:51.519 "name": null, 00:17:51.519 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:51.519 "is_configured": false, 00:17:51.519 "data_offset": 2048, 00:17:51.519 "data_size": 63488 00:17:51.519 }, 00:17:51.519 { 00:17:51.519 "name": null, 00:17:51.519 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:51.519 "is_configured": false, 00:17:51.519 "data_offset": 2048, 00:17:51.519 "data_size": 63488 00:17:51.519 }, 00:17:51.519 { 00:17:51.519 "name": null, 00:17:51.519 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:51.519 "is_configured": false, 00:17:51.519 "data_offset": 2048, 00:17:51.519 "data_size": 63488 00:17:51.519 } 00:17:51.519 ] 00:17:51.519 }' 00:17:51.519 04:55:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.519 04:55:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.086 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:52.086 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:52.086 04:55:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.345 [2024-05-15 04:55:06.329446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.345 [2024-05-15 04:55:06.329510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.345 [2024-05-15 04:55:06.329567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032d80 00:17:52.345 [2024-05-15 04:55:06.329588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.345 [2024-05-15 04:55:06.330149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.345 [2024-05-15 04:55:06.330203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.345 
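The pt2 registration just logged is the first pass of the re-assembly loop: each remaining passthru bdev is re-created over its malloc base and, as the "raid superblock found on bdev ptN ... is claimed" lines around this point show, re-claimed into raid_bdev1 from its on-disk superblock. In sketch form (assumed shape of the loop, not the literal script):

  for i in 2 3 4; do
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
      -u "00000000-0000-0000-0000-00000000000$i"
    # each registration triggers raid_bdev_examine_load_sb_cb and a claim
  done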
[2024-05-15 04:55:06.330294] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:52.345 [2024-05-15 04:55:06.330315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.345 pt2 00:17:52.345 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:52.345 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:52.345 04:55:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:52.345 [2024-05-15 04:55:06.465469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:52.345 [2024-05-15 04:55:06.465524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.345 [2024-05-15 04:55:06.465557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280 00:17:52.345 [2024-05-15 04:55:06.465583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.345 pt3 00:17:52.345 [2024-05-15 04:55:06.466526] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.345 [2024-05-15 04:55:06.466686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:52.345 [2024-05-15 04:55:06.466947] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:52.345 [2024-05-15 04:55:06.467004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:52.345 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:52.345 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:52.345 04:55:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:52.604 [2024-05-15 04:55:06.609541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:52.604 [2024-05-15 04:55:06.609640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.604 [2024-05-15 04:55:06.609696] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780 00:17:52.604 [2024-05-15 04:55:06.609932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.604 [2024-05-15 04:55:06.610385] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.604 [2024-05-15 04:55:06.610440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:52.604 [2024-05-15 04:55:06.610561] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:52.604 [2024-05-15 04:55:06.610587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:52.604 [2024-05-15 04:55:06.610701] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000031280 00:17:52.604 [2024-05-15 04:55:06.610734] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:52.604 [2024-05-15 04:55:06.610871] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:52.604 pt4 00:17:52.604 [2024-05-15 04:55:06.611139] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000031280 00:17:52.604 [2024-05-15 04:55:06.611158] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x616000031280 00:17:52.604 [2024-05-15 04:55:06.611292] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.604 04:55:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.863 04:55:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.863 "name": "raid_bdev1", 00:17:52.863 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:52.863 "strip_size_kb": 0, 00:17:52.863 "state": "online", 00:17:52.863 "raid_level": "raid1", 00:17:52.863 "superblock": true, 00:17:52.863 "num_base_bdevs": 4, 00:17:52.863 "num_base_bdevs_discovered": 4, 00:17:52.863 "num_base_bdevs_operational": 4, 00:17:52.863 "base_bdevs_list": [ 00:17:52.863 { 00:17:52.863 "name": "pt1", 00:17:52.863 "uuid": "efad7951-eb00-5d26-a477-f4732fed4f9d", 00:17:52.863 "is_configured": true, 00:17:52.863 "data_offset": 2048, 00:17:52.863 "data_size": 63488 00:17:52.863 }, 00:17:52.863 { 00:17:52.863 "name": "pt2", 00:17:52.863 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:52.863 "is_configured": true, 00:17:52.863 "data_offset": 2048, 00:17:52.863 "data_size": 63488 00:17:52.863 }, 00:17:52.863 { 00:17:52.863 "name": "pt3", 00:17:52.863 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:52.863 "is_configured": true, 00:17:52.863 "data_offset": 2048, 00:17:52.863 "data_size": 63488 00:17:52.863 }, 00:17:52.863 { 00:17:52.863 "name": "pt4", 00:17:52.863 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:52.863 "is_configured": true, 00:17:52.863 "data_offset": 2048, 00:17:52.863 "data_size": 63488 00:17:52.863 } 00:17:52.863 ] 00:17:52.863 }' 00:17:52.863 04:55:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.864 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:17:53.431 04:55:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:53.431 04:55:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:53.431 [2024-05-15 04:55:07.541690] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.431 04:55:07 -- bdev/bdev_raid.sh@430 -- # '[' f5a6e51a-5bb1-4059-bb80-2c134539e419 '!=' f5a6e51a-5bb1-4059-bb80-2c134539e419 ']' 00:17:53.431 04:55:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:53.431 04:55:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:53.431 04:55:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:53.431 04:55:07 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:53.689 [2024-05-15 04:55:07.685666] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.689 "name": "raid_bdev1", 00:17:53.689 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:53.689 "strip_size_kb": 0, 00:17:53.689 "state": "online", 00:17:53.689 "raid_level": "raid1", 00:17:53.689 "superblock": true, 00:17:53.689 "num_base_bdevs": 4, 00:17:53.689 "num_base_bdevs_discovered": 3, 00:17:53.689 "num_base_bdevs_operational": 3, 00:17:53.689 "base_bdevs_list": [ 00:17:53.689 { 00:17:53.689 "name": null, 00:17:53.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.689 "is_configured": false, 00:17:53.689 "data_offset": 2048, 00:17:53.689 "data_size": 63488 00:17:53.689 }, 00:17:53.689 { 00:17:53.689 "name": "pt2", 00:17:53.689 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:53.689 "is_configured": true, 00:17:53.689 "data_offset": 2048, 00:17:53.689 "data_size": 63488 00:17:53.689 }, 00:17:53.689 { 00:17:53.689 "name": "pt3", 00:17:53.689 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:53.689 "is_configured": true, 00:17:53.689 "data_offset": 2048, 00:17:53.689 "data_size": 63488 00:17:53.689 }, 00:17:53.689 { 00:17:53.689 "name": "pt4", 00:17:53.689 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:53.689 "is_configured": true, 00:17:53.689 "data_offset": 2048, 00:17:53.689 "data_size": 63488 00:17:53.689 } 00:17:53.689 ] 00:17:53.689 }' 00:17:53.689 04:55:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.689 04:55:07 -- common/autotest_common.sh@10 -- # set +x 00:17:54.254 04:55:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:54.254 [2024-05-15 04:55:08.465670] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.254 [2024-05-15 04:55:08.465700] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.254 [2024-05-15 04:55:08.465943] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.254 [2024-05-15 04:55:08.466006] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.254 [2024-05-15 04:55:08.466016] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000031280 name raid_bdev1, state offline 00:17:54.254 04:55:08 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:54.254 04:55:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.512 04:55:08 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:54.512 04:55:08 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:54.512 04:55:08 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:54.512 04:55:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:54.512 04:55:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:54.769 04:55:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:54.769 04:55:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:54.769 04:55:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:55.027 04:55:09 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:55.285 [2024-05-15 04:55:09.377789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:55.285 [2024-05-15 04:55:09.377865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.285 [2024-05-15 04:55:09.377910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036c80 00:17:55.285 [2024-05-15 04:55:09.377938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.285 [2024-05-15 04:55:09.379572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.285 [2024-05-15 04:55:09.379631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:55.285 [2024-05-15 04:55:09.379733] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:55.285 [2024-05-15 04:55:09.379776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.285 pt2 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.285 04:55:09 -- 
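At this point the raid bdev and all passthru members have been deleted; re-registering a single surviving member is enough for its superblock to bring raid_bdev1 back in 'configuring' state, now with three operational members since pt1 was dropped from the superblock earlier. A sketch of that check (jq filter in the style the trace itself uses):

  "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 \
    -u 00000000-0000-0000-0000-000000000002
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  # expected: configuring (1 of 3 operational members discovered, per the JSON below)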
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.285 04:55:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.544 04:55:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.544 "name": "raid_bdev1", 00:17:55.544 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:55.544 "strip_size_kb": 0, 00:17:55.544 "state": "configuring", 00:17:55.544 "raid_level": "raid1", 00:17:55.544 "superblock": true, 00:17:55.544 "num_base_bdevs": 4, 00:17:55.544 "num_base_bdevs_discovered": 1, 00:17:55.544 "num_base_bdevs_operational": 3, 00:17:55.544 "base_bdevs_list": [ 00:17:55.544 { 00:17:55.544 "name": null, 00:17:55.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.544 "is_configured": false, 00:17:55.544 "data_offset": 2048, 00:17:55.544 "data_size": 63488 00:17:55.544 }, 00:17:55.544 { 00:17:55.544 "name": "pt2", 00:17:55.544 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:55.544 "is_configured": true, 00:17:55.544 "data_offset": 2048, 00:17:55.544 "data_size": 63488 00:17:55.544 }, 00:17:55.544 { 00:17:55.544 "name": null, 00:17:55.544 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:55.544 "is_configured": false, 00:17:55.544 "data_offset": 2048, 00:17:55.544 "data_size": 63488 00:17:55.544 }, 00:17:55.544 { 00:17:55.544 "name": null, 00:17:55.544 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:55.544 "is_configured": false, 00:17:55.544 "data_offset": 2048, 00:17:55.544 "data_size": 63488 00:17:55.544 } 00:17:55.544 ] 00:17:55.544 }' 00:17:55.544 04:55:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.544 04:55:09 -- common/autotest_common.sh@10 -- # set +x 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:56.111 [2024-05-15 04:55:10.253919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:56.111 [2024-05-15 04:55:10.253998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.111 [2024-05-15 04:55:10.254046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038780 00:17:56.111 [2024-05-15 04:55:10.254074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.111 [2024-05-15 04:55:10.254412] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.111 [2024-05-15 04:55:10.254440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:56.111 [2024-05-15 04:55:10.254522] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:56.111 [2024-05-15 04:55:10.254543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:56.111 pt3 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.111 04:55:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.370 04:55:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.370 "name": "raid_bdev1", 00:17:56.370 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:56.370 "strip_size_kb": 0, 00:17:56.370 "state": "configuring", 00:17:56.370 "raid_level": "raid1", 00:17:56.370 "superblock": true, 00:17:56.370 "num_base_bdevs": 4, 00:17:56.370 "num_base_bdevs_discovered": 2, 00:17:56.370 "num_base_bdevs_operational": 3, 00:17:56.370 "base_bdevs_list": [ 00:17:56.370 { 00:17:56.370 "name": null, 00:17:56.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.370 "is_configured": false, 00:17:56.370 "data_offset": 2048, 00:17:56.370 "data_size": 63488 00:17:56.370 }, 00:17:56.370 { 00:17:56.370 "name": "pt2", 00:17:56.370 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:56.370 "is_configured": true, 00:17:56.370 "data_offset": 2048, 00:17:56.370 "data_size": 63488 00:17:56.370 }, 00:17:56.370 { 00:17:56.370 "name": "pt3", 00:17:56.370 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:56.370 "is_configured": true, 00:17:56.370 "data_offset": 2048, 00:17:56.370 "data_size": 63488 00:17:56.370 }, 00:17:56.370 { 00:17:56.370 "name": null, 00:17:56.370 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:56.370 "is_configured": false, 00:17:56.370 "data_offset": 2048, 00:17:56.370 "data_size": 63488 00:17:56.370 } 00:17:56.370 ] 00:17:56.370 }' 00:17:56.370 04:55:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.370 04:55:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.937 04:55:11 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:56.937 04:55:11 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:56.937 04:55:11 -- bdev/bdev_raid.sh@462 -- # i=3 00:17:56.937 04:55:11 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:57.196 [2024-05-15 04:55:11.258004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:57.196 [2024-05-15 04:55:11.258083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.196 [2024-05-15 04:55:11.258137] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000039c80 00:17:57.196 [2024-05-15 04:55:11.258160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.196 [2024-05-15 04:55:11.258521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.196 [2024-05-15 04:55:11.258566] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:57.196 [2024-05-15 04:55:11.258661] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:57.196 [2024-05-15 04:55:11.258682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:57.196 [2024-05-15 04:55:11.258926] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x616000038180 00:17:57.196 [2024-05-15 04:55:11.258945] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:57.196 [2024-05-15 04:55:11.259063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:57.196 [2024-05-15 04:55:11.259288] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000038180 00:17:57.196 [2024-05-15 04:55:11.259300] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000038180 00:17:57.196 [2024-05-15 04:55:11.259416] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.196 pt4 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.196 04:55:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.454 04:55:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.454 "name": "raid_bdev1", 00:17:57.454 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:57.454 "strip_size_kb": 0, 00:17:57.454 "state": "online", 00:17:57.454 "raid_level": "raid1", 00:17:57.454 "superblock": true, 00:17:57.454 "num_base_bdevs": 4, 00:17:57.454 "num_base_bdevs_discovered": 3, 00:17:57.454 "num_base_bdevs_operational": 3, 00:17:57.454 "base_bdevs_list": [ 00:17:57.454 { 00:17:57.454 "name": null, 00:17:57.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.454 "is_configured": false, 00:17:57.454 "data_offset": 2048, 00:17:57.454 "data_size": 63488 00:17:57.454 }, 00:17:57.454 { 00:17:57.454 "name": "pt2", 00:17:57.454 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:57.454 "is_configured": true, 00:17:57.454 "data_offset": 2048, 00:17:57.454 "data_size": 63488 00:17:57.454 }, 00:17:57.454 { 00:17:57.454 "name": "pt3", 00:17:57.454 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:57.454 "is_configured": true, 00:17:57.454 "data_offset": 2048, 00:17:57.454 "data_size": 63488 00:17:57.454 }, 00:17:57.454 { 00:17:57.454 "name": "pt4", 00:17:57.454 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:57.454 "is_configured": true, 00:17:57.454 "data_offset": 2048, 00:17:57.454 "data_size": 63488 00:17:57.454 } 00:17:57.454 ] 00:17:57.454 }' 00:17:57.454 04:55:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.454 04:55:11 -- common/autotest_common.sh@10 -- # set +x 00:17:58.020 04:55:11 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:17:58.020 04:55:11 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:58.020 [2024-05-15 04:55:12.118069] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:17:58.020 [2024-05-15 04:55:12.118100] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.020 [2024-05-15 04:55:12.118158] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.020 [2024-05-15 04:55:12.118206] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.020 [2024-05-15 04:55:12.118215] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000038180 name raid_bdev1, state offline 00:17:58.020 04:55:12 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.020 04:55:12 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.279 [2024-05-15 04:55:12.482147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.279 [2024-05-15 04:55:12.482245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.279 [2024-05-15 04:55:12.482291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003b180 00:17:58.279 [2024-05-15 04:55:12.482311] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.279 [2024-05-15 04:55:12.483893] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.279 [2024-05-15 04:55:12.483965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.279 [2024-05-15 04:55:12.484049] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:58.279 [2024-05-15 04:55:12.484089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.279 pt1 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.279 04:55:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.537 04:55:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.537 "name": "raid_bdev1", 00:17:58.537 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:17:58.537 "strip_size_kb": 0, 00:17:58.537 "state": "configuring", 00:17:58.537 "raid_level": "raid1", 00:17:58.537 "superblock": true, 00:17:58.537 "num_base_bdevs": 4, 00:17:58.537 "num_base_bdevs_discovered": 1, 
00:17:58.537 "num_base_bdevs_operational": 4, 00:17:58.538 "base_bdevs_list": [ 00:17:58.538 { 00:17:58.538 "name": "pt1", 00:17:58.538 "uuid": "efad7951-eb00-5d26-a477-f4732fed4f9d", 00:17:58.538 "is_configured": true, 00:17:58.538 "data_offset": 2048, 00:17:58.538 "data_size": 63488 00:17:58.538 }, 00:17:58.538 { 00:17:58.538 "name": null, 00:17:58.538 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:17:58.538 "is_configured": false, 00:17:58.538 "data_offset": 2048, 00:17:58.538 "data_size": 63488 00:17:58.538 }, 00:17:58.538 { 00:17:58.538 "name": null, 00:17:58.538 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:17:58.538 "is_configured": false, 00:17:58.538 "data_offset": 2048, 00:17:58.538 "data_size": 63488 00:17:58.538 }, 00:17:58.538 { 00:17:58.538 "name": null, 00:17:58.538 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:17:58.538 "is_configured": false, 00:17:58.538 "data_offset": 2048, 00:17:58.538 "data_size": 63488 00:17:58.538 } 00:17:58.538 ] 00:17:58.538 }' 00:17:58.538 04:55:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.538 04:55:12 -- common/autotest_common.sh@10 -- # set +x 00:17:59.103 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:59.103 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:59.103 04:55:13 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:59.362 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:59.362 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:59.362 04:55:13 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:59.619 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:59.619 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:59.619 04:55:13 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:59.877 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:59.877 04:55:13 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:59.877 04:55:13 -- bdev/bdev_raid.sh@489 -- # i=3 00:17:59.877 04:55:13 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:59.877 [2024-05-15 04:55:14.106268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:59.877 [2024-05-15 04:55:14.106359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.877 [2024-05-15 04:55:14.106403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003cc80 00:17:59.877 [2024-05-15 04:55:14.106430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.877 [2024-05-15 04:55:14.106776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.877 [2024-05-15 04:55:14.106823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:59.877 [2024-05-15 04:55:14.106912] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:59.877 [2024-05-15 04:55:14.106924] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:59.877 [2024-05-15 04:55:14.106932] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.877 
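What was just logged is the superblock versioning path: pt4 arrives carrying a newer superblock (seq_number 4) than the half-assembled raid bdev (seq_number 2), so the stale 'configuring' instance is deleted and rebuilt around pt4, as the cleanup lines below complete. Sketched as a standalone check (assumed shape):

  "$rpc" -s "$sock" bdev_passthru_create -b malloc4 -p pt4 \
    -u 00000000-0000-0000-0000-000000000004
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # expected: configuring 1/4 -- only pt4 is claimed at this point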
[2024-05-15 04:55:14.106951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003c680 name raid_bdev1, state configuring 00:17:59.877 [2024-05-15 04:55:14.107043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:00.136 pt4 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.136 "name": "raid_bdev1", 00:18:00.136 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:18:00.136 "strip_size_kb": 0, 00:18:00.136 "state": "configuring", 00:18:00.136 "raid_level": "raid1", 00:18:00.136 "superblock": true, 00:18:00.136 "num_base_bdevs": 4, 00:18:00.136 "num_base_bdevs_discovered": 1, 00:18:00.136 "num_base_bdevs_operational": 3, 00:18:00.136 "base_bdevs_list": [ 00:18:00.136 { 00:18:00.136 "name": null, 00:18:00.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.136 "is_configured": false, 00:18:00.136 "data_offset": 2048, 00:18:00.136 "data_size": 63488 00:18:00.136 }, 00:18:00.136 { 00:18:00.136 "name": null, 00:18:00.136 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:18:00.136 "is_configured": false, 00:18:00.136 "data_offset": 2048, 00:18:00.136 "data_size": 63488 00:18:00.136 }, 00:18:00.136 { 00:18:00.136 "name": null, 00:18:00.136 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:18:00.136 "is_configured": false, 00:18:00.136 "data_offset": 2048, 00:18:00.136 "data_size": 63488 00:18:00.136 }, 00:18:00.136 { 00:18:00.136 "name": "pt4", 00:18:00.136 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:18:00.136 "is_configured": true, 00:18:00.136 "data_offset": 2048, 00:18:00.136 "data_size": 63488 00:18:00.136 } 00:18:00.136 ] 00:18:00.136 }' 00:18:00.136 04:55:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.136 04:55:14 -- common/autotest_common.sh@10 -- # set +x 00:18:00.702 04:55:14 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:00.702 04:55:14 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:00.702 04:55:14 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.960 [2024-05-15 04:55:14.978384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.960 [2024-05-15 04:55:14.978472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.960 [2024-05-15 04:55:14.978559] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600003e480 00:18:00.961 [2024-05-15 04:55:14.978587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.961 [2024-05-15 04:55:14.978933] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.961 [2024-05-15 04:55:14.978990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.961 [2024-05-15 04:55:14.979081] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:00.961 [2024-05-15 04:55:14.979102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.961 pt2 00:18:00.961 04:55:14 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:00.961 04:55:14 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:00.961 04:55:14 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.961 [2024-05-15 04:55:15.114379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:00.961 [2024-05-15 04:55:15.114433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.961 [2024-05-15 04:55:15.114483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:00.961 [2024-05-15 04:55:15.114526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.961 [2024-05-15 04:55:15.114821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.961 [2024-05-15 04:55:15.114858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.961 [2024-05-15 04:55:15.114939] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:00.961 [2024-05-15 04:55:15.114975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.961 [2024-05-15 04:55:15.115045] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003de80 00:18:00.961 [2024-05-15 04:55:15.115053] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:00.961 [2024-05-15 04:55:15.115121] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:18:00.961 [2024-05-15 04:55:15.115289] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003de80 00:18:00.961 [2024-05-15 04:55:15.115306] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600003de80 00:18:00.961 [2024-05-15 04:55:15.115392] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.961 pt3 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.961 
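With pt2 and pt3 re-registered, all three surviving members are present and raid_bdev1 comes back online, which the verify helper below confirms. An equivalent one-liner (hedged sketch, same jq style as the trace):

  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # expected: online 3/4 -- pt1 stays absent; raid1 tolerates the missing member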
04:55:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.961 04:55:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.220 04:55:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.220 "name": "raid_bdev1", 00:18:01.220 "uuid": "f5a6e51a-5bb1-4059-bb80-2c134539e419", 00:18:01.220 "strip_size_kb": 0, 00:18:01.220 "state": "online", 00:18:01.220 "raid_level": "raid1", 00:18:01.220 "superblock": true, 00:18:01.220 "num_base_bdevs": 4, 00:18:01.220 "num_base_bdevs_discovered": 3, 00:18:01.220 "num_base_bdevs_operational": 3, 00:18:01.220 "base_bdevs_list": [ 00:18:01.220 { 00:18:01.220 "name": null, 00:18:01.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.220 "is_configured": false, 00:18:01.220 "data_offset": 2048, 00:18:01.220 "data_size": 63488 00:18:01.220 }, 00:18:01.220 { 00:18:01.220 "name": "pt2", 00:18:01.220 "uuid": "3e7b6fa0-4670-57ea-b2ea-7f83f3bd6730", 00:18:01.220 "is_configured": true, 00:18:01.220 "data_offset": 2048, 00:18:01.220 "data_size": 63488 00:18:01.220 }, 00:18:01.220 { 00:18:01.220 "name": "pt3", 00:18:01.220 "uuid": "ed73f836-8232-5a9c-871d-f4fa9929d280", 00:18:01.220 "is_configured": true, 00:18:01.220 "data_offset": 2048, 00:18:01.220 "data_size": 63488 00:18:01.220 }, 00:18:01.220 { 00:18:01.220 "name": "pt4", 00:18:01.220 "uuid": "8b26ae6b-6740-5684-91f1-4cf62e51bea7", 00:18:01.220 "is_configured": true, 00:18:01.220 "data_offset": 2048, 00:18:01.220 "data_size": 63488 00:18:01.220 } 00:18:01.220 ] 00:18:01.220 }' 00:18:01.220 04:55:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.220 04:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:01.817 04:55:15 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.817 04:55:15 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:02.075 [2024-05-15 04:55:16.066614] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.075 04:55:16 -- bdev/bdev_raid.sh@506 -- # '[' f5a6e51a-5bb1-4059-bb80-2c134539e419 '!=' f5a6e51a-5bb1-4059-bb80-2c134539e419 ']' 00:18:02.075 04:55:16 -- bdev/bdev_raid.sh@511 -- # killprocess 56106 00:18:02.075 04:55:16 -- common/autotest_common.sh@926 -- # '[' -z 56106 ']' 00:18:02.075 04:55:16 -- common/autotest_common.sh@930 -- # kill -0 56106 00:18:02.075 04:55:16 -- common/autotest_common.sh@931 -- # uname 00:18:02.075 04:55:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.075 04:55:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56106 00:18:02.075 04:55:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.075 killing process with pid 56106 00:18:02.075 04:55:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.075 04:55:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56106' 00:18:02.075 04:55:16 -- common/autotest_common.sh@945 -- # kill 56106 00:18:02.075 04:55:16 -- common/autotest_common.sh@950 -- # wait 56106 00:18:02.075 [2024-05-15 04:55:16.111093] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.075 [2024-05-15 04:55:16.111148] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.075 [2024-05-15 04:55:16.111197] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.075 [2024-05-15 04:55:16.111206] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003de80 name raid_bdev1, state offline 00:18:02.333 [2024-05-15 04:55:16.499568] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.235 04:55:17 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:04.235 00:18:04.235 real 0m19.648s 00:18:04.235 user 0m34.672s 00:18:04.235 sys 0m2.488s 00:18:04.235 ************************************ 00:18:04.235 END TEST raid_superblock_test 00:18:04.235 ************************************ 00:18:04.235 04:55:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:04.235 04:55:17 -- common/autotest_common.sh@10 -- # set +x 00:18:04.235 04:55:17 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:18:04.235 04:55:17 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:18:04.235 04:55:17 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:18:04.235 ************************************ 00:18:04.235 END TEST bdev_raid 00:18:04.235 ************************************ 00:18:04.235 00:18:04.235 real 5m30.640s 00:18:04.235 user 9m4.790s 00:18:04.235 sys 0m44.033s 00:18:04.235 04:55:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:04.235 04:55:17 -- common/autotest_common.sh@10 -- # set +x 00:18:04.235 04:55:18 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:04.235 04:55:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:04.235 04:55:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:04.235 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:18:04.235 ************************************ 00:18:04.235 START TEST bdevperf_config 00:18:04.235 ************************************ 00:18:04.235 04:55:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:04.235 * Looking for test storage... 
00:18:04.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:18:04.235 04:55:18 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:18:04.235 04:55:18 -- bdevperf/common.sh@8 -- # local job_section=global 00:18:04.235 04:55:18 -- bdevperf/common.sh@9 -- # local rw=read 00:18:04.235 04:55:18 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:04.235 04:55:18 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:04.235 04:55:18 -- bdevperf/common.sh@13 -- # cat 00:18:04.235 04:55:18 -- bdevperf/common.sh@18 -- # job='[global]' 00:18:04.235 04:55:18 -- bdevperf/common.sh@19 -- # echo 00:18:04.235 00:18:04.235 04:55:18 -- bdevperf/common.sh@20 -- # cat 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@18 -- # create_job job0 00:18:04.235 04:55:18 -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:04.235 04:55:18 -- bdevperf/common.sh@9 -- # local rw= 00:18:04.235 04:55:18 -- bdevperf/common.sh@10 -- # local filename= 00:18:04.235 04:55:18 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:04.235 04:55:18 -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:04.235 04:55:18 -- bdevperf/common.sh@19 -- # echo 00:18:04.235 00:18:04.235 04:55:18 -- bdevperf/common.sh@20 -- # cat 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@19 -- # create_job job1 00:18:04.235 04:55:18 -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:04.235 04:55:18 -- bdevperf/common.sh@9 -- # local rw= 00:18:04.235 04:55:18 -- bdevperf/common.sh@10 -- # local filename= 00:18:04.235 04:55:18 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:04.235 04:55:18 -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:04.235 00:18:04.235 04:55:18 -- bdevperf/common.sh@19 -- # echo 00:18:04.235 04:55:18 -- bdevperf/common.sh@20 -- # cat 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@20 -- # create_job job2 00:18:04.235 04:55:18 -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:04.235 04:55:18 -- bdevperf/common.sh@9 -- # local rw= 00:18:04.235 04:55:18 -- bdevperf/common.sh@10 -- # local filename= 00:18:04.235 04:55:18 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:04.235 04:55:18 -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:04.235 00:18:04.235 04:55:18 -- bdevperf/common.sh@19 -- # echo 00:18:04.235 04:55:18 -- bdevperf/common.sh@20 -- # cat 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@21 -- # create_job job3 00:18:04.235 00:18:04.235 04:55:18 -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:04.235 04:55:18 -- bdevperf/common.sh@9 -- # local rw= 00:18:04.235 04:55:18 -- bdevperf/common.sh@10 -- # local filename= 00:18:04.235 04:55:18 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:04.235 04:55:18 -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:04.235 04:55:18 -- bdevperf/common.sh@19 -- # echo 00:18:04.235 04:55:18 -- bdevperf/common.sh@20 -- # cat 00:18:04.235 04:55:18 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:09.499 04:55:23 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-15 04:55:18.314947] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:09.499 [2024-05-15 04:55:18.315191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56822 ] 00:18:09.499 Using job config with 4 jobs 00:18:09.499 [2024-05-15 04:55:18.484775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.499 [2024-05-15 04:55:18.719317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.499 cpumask for '\''job0'\'' is too big 00:18:09.499 cpumask for '\''job1'\'' is too big 00:18:09.499 cpumask for '\''job2'\'' is too big 00:18:09.499 cpumask for '\''job3'\'' is too big 00:18:09.499 Running I/O for 2 seconds... 00:18:09.499 00:18:09.499 Latency(us) 00:18:09.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.499 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.499 Malloc0 : 2.00 109379.54 106.82 0.00 0.00 2339.40 538.33 3900.95 00:18:09.499 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.499 Malloc0 : 2.00 109361.91 106.80 0.00 0.00 2338.44 481.77 3417.23 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.01 109411.20 106.85 0.00 0.00 2336.11 511.02 2995.93 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.01 109394.40 106.83 0.00 0.00 2335.27 470.06 2995.93 00:18:09.500 =================================================================================================================== 00:18:09.500 Total : 437547.05 427.29 0.00 0.00 2337.31 470.06 3900.95' 00:18:09.500 04:55:23 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-15 04:55:18.314947] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:09.500 [2024-05-15 04:55:18.315191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56822 ] 00:18:09.500 Using job config with 4 jobs 00:18:09.500 [2024-05-15 04:55:18.484775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.500 [2024-05-15 04:55:18.719317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.500 cpumask for '\''job0'\'' is too big 00:18:09.500 cpumask for '\''job1'\'' is too big 00:18:09.500 cpumask for '\''job2'\'' is too big 00:18:09.500 cpumask for '\''job3'\'' is too big 00:18:09.500 Running I/O for 2 seconds... 
00:18:09.500 00:18:09.500 Latency(us) 00:18:09.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.00 109379.54 106.82 0.00 0.00 2339.40 538.33 3900.95 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.00 109361.91 106.80 0.00 0.00 2338.44 481.77 3417.23 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.01 109411.20 106.85 0.00 0.00 2336.11 511.02 2995.93 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.01 109394.40 106.83 0.00 0.00 2335.27 470.06 2995.93 00:18:09.500 =================================================================================================================== 00:18:09.500 Total : 437547.05 427.29 0.00 0.00 2337.31 470.06 3900.95' 00:18:09.500 04:55:23 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:09.500 04:55:23 -- bdevperf/common.sh@32 -- # echo '[2024-05-15 04:55:18.314947] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:09.500 [2024-05-15 04:55:18.315191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56822 ] 00:18:09.500 Using job config with 4 jobs 00:18:09.500 [2024-05-15 04:55:18.484775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.500 [2024-05-15 04:55:18.719317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.500 cpumask for '\''job0'\'' is too big 00:18:09.500 cpumask for '\''job1'\'' is too big 00:18:09.500 cpumask for '\''job2'\'' is too big 00:18:09.500 cpumask for '\''job3'\'' is too big 00:18:09.500 Running I/O for 2 seconds... 00:18:09.500 00:18:09.500 Latency(us) 00:18:09.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.00 109379.54 106.82 0.00 0.00 2339.40 538.33 3900.95 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.00 109361.91 106.80 0.00 0.00 2338.44 481.77 3417.23 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.01 109411.20 106.85 0.00 0.00 2336.11 511.02 2995.93 00:18:09.500 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:09.500 Malloc0 : 2.01 109394.40 106.83 0.00 0.00 2335.27 470.06 2995.93 00:18:09.500 =================================================================================================================== 00:18:09.500 Total : 437547.05 427.29 0.00 0.00 2337.31 470.06 3900.95' 00:18:09.500 04:55:23 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:09.500 04:55:23 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:18:09.500 04:55:23 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:09.500 [2024-05-15 04:55:23.454845] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:09.500 [2024-05-15 04:55:23.455026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56892 ] 00:18:09.500 [2024-05-15 04:55:23.605663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.758 [2024-05-15 04:55:23.848938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.323 cpumask for 'job0' is too big 00:18:10.323 cpumask for 'job1' is too big 00:18:10.323 cpumask for 'job2' is too big 00:18:10.323 cpumask for 'job3' is too big 00:18:14.530 04:55:28 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:18:14.530 Running I/O for 2 seconds... 00:18:14.530 00:18:14.530 Latency(us) 00:18:14.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.530 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:14.530 Malloc0 : 2.00 109635.81 107.07 0.00 0.00 2334.15 530.53 4025.78 00:18:14.530 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:14.530 Malloc0 : 2.00 109620.05 107.05 0.00 0.00 2332.92 477.87 3479.65 00:18:14.530 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:14.530 Malloc0 : 2.01 109668.78 107.10 0.00 0.00 2330.51 497.37 2933.52 00:18:14.531 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:14.531 Malloc0 : 2.01 109652.59 107.08 0.00 0.00 2329.55 475.92 2652.65 00:18:14.531 =================================================================================================================== 00:18:14.531 Total : 438577.24 428.30 0.00 0.00 2331.78 475.92 4025.78' 00:18:14.531 04:55:28 -- bdevperf/test_config.sh@27 -- # cleanup 00:18:14.531 04:55:28 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:14.531 04:55:28 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:18:14.531 04:55:28 -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:14.531 04:55:28 -- bdevperf/common.sh@9 -- # local rw=write 00:18:14.531 04:55:28 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:14.531 04:55:28 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:14.531 00:18:14.531 04:55:28 -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:14.531 04:55:28 -- bdevperf/common.sh@19 -- # echo 00:18:14.531 04:55:28 -- bdevperf/common.sh@20 -- # cat 00:18:14.531 04:55:28 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:18:14.531 04:55:28 -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:14.531 04:55:28 -- bdevperf/common.sh@9 -- # local rw=write 00:18:14.531 04:55:28 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:14.531 04:55:28 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:14.531 04:55:28 -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:14.531 00:18:14.531 04:55:28 -- bdevperf/common.sh@19 -- # echo 00:18:14.531 04:55:28 -- bdevperf/common.sh@20 -- # cat 00:18:14.531 04:55:28 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:18:14.531 04:55:28 -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:14.531 04:55:28 -- bdevperf/common.sh@9 -- # local rw=write 00:18:14.531 04:55:28 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:14.531 04:55:28 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:14.531 00:18:14.531 04:55:28 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:18:14.531 04:55:28 -- bdevperf/common.sh@19 -- # echo 00:18:14.531 04:55:28 -- bdevperf/common.sh@20 -- # cat 00:18:14.531 04:55:28 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:19.792 04:55:33 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-15 04:55:28.602218] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:19.792 [2024-05-15 04:55:28.602379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56955 ] 00:18:19.792 Using job config with 3 jobs 00:18:19.792 [2024-05-15 04:55:28.753258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.792 [2024-05-15 04:55:29.020153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.792 cpumask for '\''job0'\'' is too big 00:18:19.792 cpumask for '\''job1'\'' is too big 00:18:19.792 cpumask for '\''job2'\'' is too big 00:18:19.792 Running I/O for 2 seconds... 00:18:19.792 00:18:19.792 Latency(us) 00:18:19.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.792 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.792 Malloc0 : 2.00 145187.32 141.78 0.00 0.00 1762.23 522.73 2761.87 00:18:19.792 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.792 Malloc0 : 2.00 145166.59 141.76 0.00 0.00 1761.37 483.72 2293.76 00:18:19.792 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.792 Malloc0 : 2.00 145225.51 141.82 0.00 0.00 1759.74 220.40 2293.76 00:18:19.792 =================================================================================================================== 00:18:19.792 Total : 435579.41 425.37 0.00 0.00 1761.11 220.40 2761.87' 00:18:19.792 04:55:33 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-15 04:55:28.602218] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:19.792 [2024-05-15 04:55:28.602379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56955 ] 00:18:19.792 Using job config with 3 jobs 00:18:19.792 [2024-05-15 04:55:28.753258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.792 [2024-05-15 04:55:29.020153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.792 cpumask for '\''job0'\'' is too big 00:18:19.793 cpumask for '\''job1'\'' is too big 00:18:19.793 cpumask for '\''job2'\'' is too big 00:18:19.793 Running I/O for 2 seconds... 
00:18:19.793 00:18:19.793 Latency(us) 00:18:19.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.793 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.793 Malloc0 : 2.00 145187.32 141.78 0.00 0.00 1762.23 522.73 2761.87 00:18:19.793 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.793 Malloc0 : 2.00 145166.59 141.76 0.00 0.00 1761.37 483.72 2293.76 00:18:19.793 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.793 Malloc0 : 2.00 145225.51 141.82 0.00 0.00 1759.74 220.40 2293.76 00:18:19.793 =================================================================================================================== 00:18:19.793 Total : 435579.41 425.37 0.00 0.00 1761.11 220.40 2761.87' 00:18:19.793 04:55:33 -- bdevperf/common.sh@32 -- # echo '[2024-05-15 04:55:28.602218] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:19.793 [2024-05-15 04:55:28.602379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56955 ] 00:18:19.793 Using job config with 3 jobs 00:18:19.793 [2024-05-15 04:55:28.753258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.793 [2024-05-15 04:55:29.020153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.793 cpumask for '\''job0'\'' is too big 00:18:19.793 cpumask for '\''job1'\'' is too big 00:18:19.793 cpumask for '\''job2'\'' is too big 00:18:19.793 Running I/O for 2 seconds... 00:18:19.793 00:18:19.793 Latency(us) 00:18:19.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.793 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.793 Malloc0 : 2.00 145187.32 141.78 0.00 0.00 1762.23 522.73 2761.87 00:18:19.793 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.793 Malloc0 : 2.00 145166.59 141.76 0.00 0.00 1761.37 483.72 2293.76 00:18:19.793 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:19.793 Malloc0 : 2.00 145225.51 141.82 0.00 0.00 1759.74 220.40 2293.76 00:18:19.793 =================================================================================================================== 00:18:19.793 Total : 435579.41 425.37 0.00 0.00 1761.11 220.40 2761.87' 00:18:19.793 04:55:33 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:19.793 04:55:33 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@35 -- # cleanup 00:18:19.793 04:55:33 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:18:19.793 04:55:33 -- bdevperf/common.sh@8 -- # local job_section=global 00:18:19.793 04:55:33 -- bdevperf/common.sh@9 -- # local rw=rw 00:18:19.793 04:55:33 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:18:19.793 04:55:33 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:19.793 04:55:33 -- bdevperf/common.sh@13 -- # cat 00:18:19.793 00:18:19.793 04:55:33 -- bdevperf/common.sh@18 -- # job='[global]' 00:18:19.793 04:55:33 -- bdevperf/common.sh@19 -- # echo 00:18:19.793 
04:55:33 -- bdevperf/common.sh@20 -- # cat 00:18:19.793 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@38 -- # create_job job0 00:18:19.793 04:55:33 -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:19.793 04:55:33 -- bdevperf/common.sh@9 -- # local rw= 00:18:19.793 04:55:33 -- bdevperf/common.sh@10 -- # local filename= 00:18:19.793 04:55:33 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:19.793 04:55:33 -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:19.793 04:55:33 -- bdevperf/common.sh@19 -- # echo 00:18:19.793 04:55:33 -- bdevperf/common.sh@20 -- # cat 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@39 -- # create_job job1 00:18:19.793 04:55:33 -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:19.793 04:55:33 -- bdevperf/common.sh@9 -- # local rw= 00:18:19.793 00:18:19.793 04:55:33 -- bdevperf/common.sh@10 -- # local filename= 00:18:19.793 04:55:33 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:19.793 04:55:33 -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:19.793 04:55:33 -- bdevperf/common.sh@19 -- # echo 00:18:19.793 04:55:33 -- bdevperf/common.sh@20 -- # cat 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@40 -- # create_job job2 00:18:19.793 04:55:33 -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:19.793 04:55:33 -- bdevperf/common.sh@9 -- # local rw= 00:18:19.793 00:18:19.793 04:55:33 -- bdevperf/common.sh@10 -- # local filename= 00:18:19.793 04:55:33 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:19.793 04:55:33 -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:19.793 04:55:33 -- bdevperf/common.sh@19 -- # echo 00:18:19.793 04:55:33 -- bdevperf/common.sh@20 -- # cat 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@41 -- # create_job job3 00:18:19.793 04:55:33 -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:19.793 00:18:19.793 04:55:33 -- bdevperf/common.sh@9 -- # local rw= 00:18:19.793 04:55:33 -- bdevperf/common.sh@10 -- # local filename= 00:18:19.793 04:55:33 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:19.793 04:55:33 -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:19.793 04:55:33 -- bdevperf/common.sh@19 -- # echo 00:18:19.793 04:55:33 -- bdevperf/common.sh@20 -- # cat 00:18:19.793 04:55:33 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:25.073 04:55:38 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-15 04:55:33.752997] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:25.073 [2024-05-15 04:55:33.753167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57033 ] 00:18:25.073 Using job config with 4 jobs 00:18:25.073 [2024-05-15 04:55:33.907517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.073 [2024-05-15 04:55:34.153640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.073 cpumask for '\''job0'\'' is too big 00:18:25.073 cpumask for '\''job1'\'' is too big 00:18:25.073 cpumask for '\''job2'\'' is too big 00:18:25.073 cpumask for '\''job3'\'' is too big 00:18:25.073 Running I/O for 2 seconds... 
00:18:25.073 00:18:25.073 Latency(us) 00:18:25.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.073 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc0 : 2.01 53143.71 51.90 0.00 0.00 4815.63 1154.68 8613.30 00:18:25.073 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc1 : 2.01 53134.40 51.89 0.00 0.00 4814.94 1302.92 8613.30 00:18:25.073 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc0 : 2.01 53126.84 51.88 0.00 0.00 4810.86 1076.66 7489.83 00:18:25.073 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc1 : 2.01 53117.35 51.87 0.00 0.00 4810.07 1185.89 7489.83 00:18:25.073 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc0 : 2.01 53109.89 51.87 0.00 0.00 4806.56 1076.66 6366.35 00:18:25.073 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc1 : 2.01 53101.28 51.86 0.00 0.00 4806.24 1224.90 6303.94 00:18:25.073 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc0 : 2.01 53185.50 51.94 0.00 0.00 4794.42 881.62 5242.88 00:18:25.073 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.073 Malloc1 : 2.01 53176.75 51.93 0.00 0.00 4793.88 573.44 5242.88 00:18:25.073 =================================================================================================================== 00:18:25.073 Total : 425095.71 415.13 0.00 0.00 4806.57 573.44 8613.30' 00:18:25.073 04:55:38 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-15 04:55:33.752997] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:25.073 [2024-05-15 04:55:33.753167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57033 ] 00:18:25.073 Using job config with 4 jobs 00:18:25.073 [2024-05-15 04:55:33.907517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.073 [2024-05-15 04:55:34.153640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.073 cpumask for '\''job0'\'' is too big 00:18:25.073 cpumask for '\''job1'\'' is too big 00:18:25.073 cpumask for '\''job2'\'' is too big 00:18:25.073 cpumask for '\''job3'\'' is too big 00:18:25.073 Running I/O for 2 seconds... 
00:18:25.073 00:18:25.074 Latency(us) 00:18:25.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53143.71 51.90 0.00 0.00 4815.63 1154.68 8613.30 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53134.40 51.89 0.00 0.00 4814.94 1302.92 8613.30 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53126.84 51.88 0.00 0.00 4810.86 1076.66 7489.83 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53117.35 51.87 0.00 0.00 4810.07 1185.89 7489.83 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53109.89 51.87 0.00 0.00 4806.56 1076.66 6366.35 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53101.28 51.86 0.00 0.00 4806.24 1224.90 6303.94 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53185.50 51.94 0.00 0.00 4794.42 881.62 5242.88 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53176.75 51.93 0.00 0.00 4793.88 573.44 5242.88 00:18:25.074 =================================================================================================================== 00:18:25.074 Total : 425095.71 415.13 0.00 0.00 4806.57 573.44 8613.30' 00:18:25.074 04:55:38 -- bdevperf/common.sh@32 -- # echo '[2024-05-15 04:55:33.752997] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:25.074 [2024-05-15 04:55:33.753167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57033 ] 00:18:25.074 Using job config with 4 jobs 00:18:25.074 [2024-05-15 04:55:33.907517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.074 [2024-05-15 04:55:34.153640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.074 cpumask for '\''job0'\'' is too big 00:18:25.074 cpumask for '\''job1'\'' is too big 00:18:25.074 cpumask for '\''job2'\'' is too big 00:18:25.074 cpumask for '\''job3'\'' is too big 00:18:25.074 Running I/O for 2 seconds... 
00:18:25.074 00:18:25.074 Latency(us) 00:18:25.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53143.71 51.90 0.00 0.00 4815.63 1154.68 8613.30 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53134.40 51.89 0.00 0.00 4814.94 1302.92 8613.30 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53126.84 51.88 0.00 0.00 4810.86 1076.66 7489.83 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53117.35 51.87 0.00 0.00 4810.07 1185.89 7489.83 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53109.89 51.87 0.00 0.00 4806.56 1076.66 6366.35 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53101.28 51.86 0.00 0.00 4806.24 1224.90 6303.94 00:18:25.074 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc0 : 2.01 53185.50 51.94 0.00 0.00 4794.42 881.62 5242.88 00:18:25.074 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:25.074 Malloc1 : 2.01 53176.75 51.93 0.00 0.00 4793.88 573.44 5242.88 00:18:25.074 =================================================================================================================== 00:18:25.074 Total : 425095.71 415.13 0.00 0.00 4806.57 573.44 8613.30' 00:18:25.074 04:55:38 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:25.074 04:55:38 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:25.074 04:55:38 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:18:25.074 04:55:38 -- bdevperf/test_config.sh@44 -- # cleanup 00:18:25.074 04:55:38 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:25.074 04:55:38 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:25.074 00:18:25.074 real 0m20.732s 00:18:25.074 user 0m18.287s 00:18:25.074 sys 0m1.618s 00:18:25.074 ************************************ 00:18:25.074 END TEST bdevperf_config 00:18:25.074 ************************************ 00:18:25.074 04:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.074 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:18:25.074 04:55:38 -- spdk/autotest.sh@198 -- # uname -s 00:18:25.074 04:55:38 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:18:25.074 04:55:38 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:18:25.074 04:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:25.074 04:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.074 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:18:25.074 ************************************ 00:18:25.074 START TEST reactor_set_interrupt 00:18:25.074 ************************************ 00:18:25.074 04:55:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:18:25.074 * Looking for test storage... 
00:18:25.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.074 04:55:38 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:18:25.074 04:55:38 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:18:25.074 04:55:38 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.074 04:55:38 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.074 04:55:38 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:18:25.074 04:55:38 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.074 04:55:38 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:25.074 04:55:38 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:25.074 04:55:38 -- common/autotest_common.sh@34 -- # set -e 00:18:25.074 04:55:38 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:25.074 04:55:38 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:25.074 04:55:38 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:25.074 04:55:38 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:25.074 04:55:38 -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:18:25.074 04:55:38 -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:18:25.074 04:55:38 -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:18:25.074 04:55:38 -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:18:25.074 04:55:38 -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:18:25.074 04:55:38 -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:18:25.074 04:55:38 -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:18:25.074 04:55:38 -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:18:25.074 04:55:38 -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:18:25.074 04:55:38 -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:18:25.074 04:55:38 -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:18:25.074 04:55:38 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:18:25.074 04:55:38 -- common/build_config.sh@13 -- # CONFIG_TESTS=y 00:18:25.074 04:55:38 -- common/build_config.sh@14 -- # CONFIG_APPS=y 00:18:25.074 04:55:38 -- common/build_config.sh@15 -- # CONFIG_ISAL_CRYPTO=n 00:18:25.074 04:55:38 -- common/build_config.sh@16 -- # CONFIG_LIBDIR= 00:18:25.074 04:55:38 -- common/build_config.sh@17 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:25.074 04:55:38 -- common/build_config.sh@18 -- # CONFIG_DAOS_DIR= 00:18:25.074 04:55:38 -- common/build_config.sh@19 -- # CONFIG_ISCSI_INITIATOR=n 00:18:25.074 04:55:38 -- common/build_config.sh@20 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:25.074 04:55:38 -- common/build_config.sh@21 -- # CONFIG_ASAN=y 00:18:25.074 04:55:38 -- common/build_config.sh@22 -- # CONFIG_LTO=n 00:18:25.074 04:55:38 -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:25.074 04:55:38 -- common/build_config.sh@24 -- # CONFIG_FUZZER=n 00:18:25.074 04:55:38 -- common/build_config.sh@25 -- # CONFIG_USDT=n 00:18:25.074 04:55:38 -- common/build_config.sh@26 -- # CONFIG_VTUNE=n 00:18:25.074 04:55:38 -- common/build_config.sh@27 -- # CONFIG_VHOST=y 00:18:25.074 04:55:38 -- common/build_config.sh@28 -- # CONFIG_WPDK_DIR= 00:18:25.074 04:55:38 -- 
common/build_config.sh@29 -- # CONFIG_UBLK=n 00:18:25.074 04:55:38 -- common/build_config.sh@30 -- # CONFIG_URING=n 00:18:25.074 04:55:38 -- common/build_config.sh@31 -- # CONFIG_SMA=n 00:18:25.074 04:55:38 -- common/build_config.sh@32 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:25.074 04:55:38 -- common/build_config.sh@33 -- # CONFIG_IDXD_KERNEL=n 00:18:25.074 04:55:38 -- common/build_config.sh@34 -- # CONFIG_FC_PATH= 00:18:25.074 04:55:38 -- common/build_config.sh@35 -- # CONFIG_PREFIX=/usr/local 00:18:25.074 04:55:38 -- common/build_config.sh@36 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:18:25.074 04:55:38 -- common/build_config.sh@37 -- # CONFIG_XNVME=n 00:18:25.074 04:55:38 -- common/build_config.sh@38 -- # CONFIG_RDMA_PROV=verbs 00:18:25.074 04:55:38 -- common/build_config.sh@39 -- # CONFIG_RDMA_SET_TOS=y 00:18:25.074 04:55:38 -- common/build_config.sh@40 -- # CONFIG_FUZZER_LIB= 00:18:25.074 04:55:38 -- common/build_config.sh@41 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:25.074 04:55:38 -- common/build_config.sh@42 -- # CONFIG_ARCH=native 00:18:25.074 04:55:38 -- common/build_config.sh@43 -- # CONFIG_PGO_CAPTURE=n 00:18:25.074 04:55:38 -- common/build_config.sh@44 -- # CONFIG_DAOS=y 00:18:25.074 04:55:38 -- common/build_config.sh@45 -- # CONFIG_WERROR=y 00:18:25.074 04:55:38 -- common/build_config.sh@46 -- # CONFIG_DEBUG=y 00:18:25.074 04:55:38 -- common/build_config.sh@47 -- # CONFIG_AVAHI=n 00:18:25.074 04:55:38 -- common/build_config.sh@48 -- # CONFIG_CROSS_PREFIX= 00:18:25.074 04:55:38 -- common/build_config.sh@49 -- # CONFIG_PGO_USE=n 00:18:25.074 04:55:38 -- common/build_config.sh@50 -- # CONFIG_CRYPTO=n 00:18:25.074 04:55:38 -- common/build_config.sh@51 -- # CONFIG_HAVE_ARC4RANDOM=n 00:18:25.075 04:55:38 -- common/build_config.sh@52 -- # CONFIG_OPENSSL_PATH= 00:18:25.075 04:55:38 -- common/build_config.sh@53 -- # CONFIG_EXAMPLES=y 00:18:25.075 04:55:38 -- common/build_config.sh@54 -- # CONFIG_DPDK_INC_DIR= 00:18:25.075 04:55:38 -- common/build_config.sh@55 -- # CONFIG_MAX_LCORES= 00:18:25.075 04:55:38 -- common/build_config.sh@56 -- # CONFIG_VIRTIO=y 00:18:25.075 04:55:38 -- common/build_config.sh@57 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:25.075 04:55:38 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB=n 00:18:25.075 04:55:38 -- common/build_config.sh@59 -- # CONFIG_UBSAN=n 00:18:25.075 04:55:38 -- common/build_config.sh@60 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:25.075 04:55:38 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:25.075 04:55:38 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:25.075 04:55:38 -- common/build_config.sh@63 -- # CONFIG_URING_PATH= 00:18:25.075 04:55:38 -- common/build_config.sh@64 -- # CONFIG_NVME_CUSE=y 00:18:25.075 04:55:38 -- common/build_config.sh@65 -- # CONFIG_URING_ZNS=n 00:18:25.075 04:55:38 -- common/build_config.sh@66 -- # CONFIG_VFIO_USER=n 00:18:25.075 04:55:38 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:18:25.075 04:55:38 -- common/build_config.sh@68 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:18:25.075 04:55:38 -- common/build_config.sh@69 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:25.075 04:55:38 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:18:25.075 04:55:38 -- common/build_config.sh@71 -- # CONFIG_RAID5F=n 00:18:25.075 04:55:38 -- common/build_config.sh@72 -- # CONFIG_VFIO_USER_DIR= 00:18:25.075 04:55:38 -- common/build_config.sh@73 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:25.075 04:55:38 -- common/build_config.sh@74 -- # CONFIG_TSAN=n 00:18:25.075 04:55:38 
-- common/build_config.sh@75 -- # CONFIG_IDXD=y 00:18:25.075 04:55:38 -- common/build_config.sh@76 -- # CONFIG_OCF=n 00:18:25.075 04:55:38 -- common/build_config.sh@77 -- # CONFIG_CRYPTO_MLX5=n 00:18:25.075 04:55:38 -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:25.075 04:55:38 -- common/build_config.sh@79 -- # CONFIG_COVERAGE=y 00:18:25.075 04:55:38 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:25.075 04:55:38 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:25.075 04:55:38 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:25.075 04:55:38 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:25.075 04:55:38 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:25.075 04:55:38 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:25.075 04:55:38 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:25.075 04:55:38 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:25.075 04:55:38 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:25.075 04:55:38 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:25.075 04:55:38 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:25.075 04:55:38 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:25.075 04:55:38 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:25.075 04:55:38 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:25.075 04:55:38 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:25.075 04:55:38 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:25.075 #define SPDK_CONFIG_H 00:18:25.075 #define SPDK_CONFIG_APPS 1 00:18:25.075 #define SPDK_CONFIG_ARCH native 00:18:25.075 #define SPDK_CONFIG_ASAN 1 00:18:25.075 #undef SPDK_CONFIG_AVAHI 00:18:25.075 #undef SPDK_CONFIG_CET 00:18:25.075 #define SPDK_CONFIG_COVERAGE 1 00:18:25.075 #define SPDK_CONFIG_CROSS_PREFIX 00:18:25.075 #undef SPDK_CONFIG_CRYPTO 00:18:25.075 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:25.075 #undef SPDK_CONFIG_CUSTOMOCF 00:18:25.075 #define SPDK_CONFIG_DAOS 1 00:18:25.075 #define SPDK_CONFIG_DAOS_DIR 00:18:25.075 #define SPDK_CONFIG_DEBUG 1 00:18:25.075 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:25.075 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:25.075 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:25.075 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:25.075 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:25.075 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:25.075 #define SPDK_CONFIG_EXAMPLES 1 00:18:25.075 #undef SPDK_CONFIG_FC 00:18:25.075 #define SPDK_CONFIG_FC_PATH 00:18:25.075 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:25.075 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:25.075 #undef SPDK_CONFIG_FUSE 00:18:25.075 #undef SPDK_CONFIG_FUZZER 00:18:25.075 #define SPDK_CONFIG_FUZZER_LIB 00:18:25.075 #undef SPDK_CONFIG_GOLANG 00:18:25.075 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:18:25.075 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:25.075 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:25.075 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:25.075 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:18:25.075 #define 
SPDK_CONFIG_IDXD 1 00:18:25.075 #undef SPDK_CONFIG_IDXD_KERNEL 00:18:25.075 #undef SPDK_CONFIG_IPSEC_MB 00:18:25.075 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:25.075 #undef SPDK_CONFIG_ISAL 00:18:25.075 #undef SPDK_CONFIG_ISAL_CRYPTO 00:18:25.075 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:18:25.075 #define SPDK_CONFIG_LIBDIR 00:18:25.075 #undef SPDK_CONFIG_LTO 00:18:25.075 #define SPDK_CONFIG_MAX_LCORES 00:18:25.075 #define SPDK_CONFIG_NVME_CUSE 1 00:18:25.075 #undef SPDK_CONFIG_OCF 00:18:25.075 #define SPDK_CONFIG_OCF_PATH 00:18:25.075 #define SPDK_CONFIG_OPENSSL_PATH 00:18:25.075 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:25.075 #undef SPDK_CONFIG_PGO_USE 00:18:25.075 #define SPDK_CONFIG_PREFIX /usr/local 00:18:25.075 #undef SPDK_CONFIG_RAID5F 00:18:25.075 #undef SPDK_CONFIG_RBD 00:18:25.075 #define SPDK_CONFIG_RDMA 1 00:18:25.075 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:25.075 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:25.075 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:18:25.075 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:25.075 #undef SPDK_CONFIG_SHARED 00:18:25.075 #undef SPDK_CONFIG_SMA 00:18:25.075 #define SPDK_CONFIG_TESTS 1 00:18:25.075 #undef SPDK_CONFIG_TSAN 00:18:25.075 #undef SPDK_CONFIG_UBLK 00:18:25.075 #undef SPDK_CONFIG_UBSAN 00:18:25.075 #define SPDK_CONFIG_UNIT_TESTS 1 00:18:25.075 #undef SPDK_CONFIG_URING 00:18:25.075 #define SPDK_CONFIG_URING_PATH 00:18:25.075 #undef SPDK_CONFIG_URING_ZNS 00:18:25.075 #undef SPDK_CONFIG_USDT 00:18:25.075 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:25.075 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:25.075 #undef SPDK_CONFIG_VFIO_USER 00:18:25.075 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:25.075 #define SPDK_CONFIG_VHOST 1 00:18:25.075 #define SPDK_CONFIG_VIRTIO 1 00:18:25.075 #undef SPDK_CONFIG_VTUNE 00:18:25.075 #define SPDK_CONFIG_VTUNE_DIR 00:18:25.075 #define SPDK_CONFIG_WERROR 1 00:18:25.075 #define SPDK_CONFIG_WPDK_DIR 00:18:25.075 #undef SPDK_CONFIG_XNVME 00:18:25.075 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:25.075 04:55:38 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:25.075 04:55:38 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.075 04:55:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.075 04:55:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.075 04:55:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.075 04:55:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:25.075 04:55:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:25.075 04:55:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:25.075 04:55:38 -- paths/export.sh@5 -- # export PATH 00:18:25.075 04:55:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:25.075 04:55:38 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:25.075 04:55:38 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:25.075 04:55:38 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:25.075 04:55:38 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:25.075 04:55:38 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:25.075 04:55:38 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:25.075 04:55:38 -- pm/common@16 -- # TEST_TAG=N/A 00:18:25.075 04:55:38 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:25.075 04:55:38 -- common/autotest_common.sh@52 -- # : 1 00:18:25.075 04:55:38 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:18:25.075 04:55:38 -- common/autotest_common.sh@56 -- # : 0 00:18:25.075 04:55:38 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:25.075 04:55:38 -- common/autotest_common.sh@58 -- # : 0 00:18:25.075 04:55:38 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:18:25.075 04:55:38 -- common/autotest_common.sh@60 -- # : 1 00:18:25.075 04:55:38 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:25.075 04:55:38 -- common/autotest_common.sh@62 -- # : 1 00:18:25.075 04:55:38 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:18:25.075 04:55:38 -- common/autotest_common.sh@64 -- # : 00:18:25.075 04:55:38 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:18:25.075 04:55:38 -- common/autotest_common.sh@66 -- # : 0 00:18:25.075 04:55:38 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:18:25.075 04:55:38 -- common/autotest_common.sh@68 -- # : 0 00:18:25.075 04:55:38 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:18:25.075 04:55:38 -- common/autotest_common.sh@70 -- # : 0 00:18:25.075 04:55:38 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:18:25.075 04:55:38 -- common/autotest_common.sh@72 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:25.076 04:55:38 -- common/autotest_common.sh@74 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:18:25.076 04:55:38 -- common/autotest_common.sh@76 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:18:25.076 04:55:38 -- common/autotest_common.sh@78 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:18:25.076 04:55:38 -- 
common/autotest_common.sh@80 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:18:25.076 04:55:38 -- common/autotest_common.sh@82 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:18:25.076 04:55:38 -- common/autotest_common.sh@84 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:18:25.076 04:55:38 -- common/autotest_common.sh@86 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:18:25.076 04:55:38 -- common/autotest_common.sh@88 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:18:25.076 04:55:38 -- common/autotest_common.sh@90 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:25.076 04:55:38 -- common/autotest_common.sh@92 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:18:25.076 04:55:38 -- common/autotest_common.sh@94 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:18:25.076 04:55:38 -- common/autotest_common.sh@96 -- # : rdma 00:18:25.076 04:55:38 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:25.076 04:55:38 -- common/autotest_common.sh@98 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:18:25.076 04:55:38 -- common/autotest_common.sh@100 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:18:25.076 04:55:38 -- common/autotest_common.sh@102 -- # : 1 00:18:25.076 04:55:38 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:18:25.076 04:55:38 -- common/autotest_common.sh@104 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:18:25.076 04:55:38 -- common/autotest_common.sh@106 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:18:25.076 04:55:38 -- common/autotest_common.sh@108 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:18:25.076 04:55:38 -- common/autotest_common.sh@110 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:18:25.076 04:55:38 -- common/autotest_common.sh@112 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:25.076 04:55:38 -- common/autotest_common.sh@114 -- # : 1 00:18:25.076 04:55:38 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:18:25.076 04:55:38 -- common/autotest_common.sh@116 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:18:25.076 04:55:38 -- common/autotest_common.sh@118 -- # : 00:18:25.076 04:55:38 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:25.076 04:55:38 -- common/autotest_common.sh@120 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:18:25.076 04:55:38 -- common/autotest_common.sh@122 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:18:25.076 04:55:38 -- common/autotest_common.sh@124 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:18:25.076 04:55:38 -- common/autotest_common.sh@126 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:18:25.076 
04:55:38 -- common/autotest_common.sh@128 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:18:25.076 04:55:38 -- common/autotest_common.sh@130 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:18:25.076 04:55:38 -- common/autotest_common.sh@132 -- # : 00:18:25.076 04:55:38 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:18:25.076 04:55:38 -- common/autotest_common.sh@134 -- # : true 00:18:25.076 04:55:38 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:18:25.076 04:55:38 -- common/autotest_common.sh@136 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:18:25.076 04:55:38 -- common/autotest_common.sh@138 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:18:25.076 04:55:38 -- common/autotest_common.sh@140 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:18:25.076 04:55:38 -- common/autotest_common.sh@142 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:18:25.076 04:55:38 -- common/autotest_common.sh@144 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:18:25.076 04:55:38 -- common/autotest_common.sh@146 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:18:25.076 04:55:38 -- common/autotest_common.sh@148 -- # : 00:18:25.076 04:55:38 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:18:25.076 04:55:38 -- common/autotest_common.sh@150 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:18:25.076 04:55:38 -- common/autotest_common.sh@152 -- # : 1 00:18:25.076 04:55:38 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:18:25.076 04:55:38 -- common/autotest_common.sh@154 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:18:25.076 04:55:38 -- common/autotest_common.sh@156 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:18:25.076 04:55:38 -- common/autotest_common.sh@158 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:18:25.076 04:55:38 -- common/autotest_common.sh@160 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:18:25.076 04:55:38 -- common/autotest_common.sh@163 -- # : 00:18:25.076 04:55:38 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:18:25.076 04:55:38 -- common/autotest_common.sh@165 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:18:25.076 04:55:38 -- common/autotest_common.sh@167 -- # : 0 00:18:25.076 04:55:38 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:25.076 04:55:38 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@173 -- # export 
VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:25.076 04:55:38 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:25.076 04:55:38 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:25.076 04:55:38 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:25.076 04:55:38 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:25.076 04:55:38 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:25.076 04:55:38 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:18:25.076 04:55:38 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:25.076 04:55:38 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:25.076 04:55:38 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:25.076 04:55:38 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:25.076 04:55:38 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:25.076 04:55:38 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:18:25.076 04:55:38 -- common/autotest_common.sh@196 -- # cat 00:18:25.076 04:55:38 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:18:25.076 04:55:38 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:25.076 04:55:38 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:25.076 04:55:38 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:25.076 04:55:38 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:25.076 04:55:38 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:18:25.076 04:55:38 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:18:25.076 04:55:38 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:25.076 04:55:38 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:25.076 04:55:38 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:25.076 04:55:38 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:25.077 04:55:38 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:18:25.077 04:55:38 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:18:25.077 04:55:38 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:18:25.077 04:55:38 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:18:25.077 04:55:38 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:25.077 04:55:38 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:25.077 04:55:38 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:25.077 04:55:38 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:25.077 04:55:38 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:18:25.077 04:55:38 -- common/autotest_common.sh@249 -- # export valgrind= 00:18:25.077 04:55:38 -- common/autotest_common.sh@249 -- # valgrind= 00:18:25.077 04:55:38 -- common/autotest_common.sh@255 -- # uname -s 00:18:25.077 04:55:38 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:18:25.077 04:55:38 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:18:25.077 04:55:38 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:18:25.077 04:55:38 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:18:25.077 04:55:38 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:18:25.077 04:55:38 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:18:25.077 04:55:38 -- common/autotest_common.sh@265 -- # MAKE=make 00:18:25.077 04:55:38 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:18:25.077 04:55:38 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:18:25.077 04:55:38 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:18:25.077 04:55:38 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:25.077 04:55:38 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:18:25.077 04:55:38 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:18:25.077 04:55:38 -- common/autotest_common.sh@309 -- # [[ -z 57140 ]] 00:18:25.077 04:55:38 -- common/autotest_common.sh@309 -- # kill -0 57140 00:18:25.077 04:55:38 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:18:25.077 04:55:38 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:18:25.077 04:55:38 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:18:25.077 04:55:38 -- common/autotest_common.sh@322 -- # local mount target_dir 00:18:25.077 04:55:38 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:18:25.077 04:55:38 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:18:25.077 04:55:38 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:18:25.077 04:55:38 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:18:25.077 04:55:38 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.PDMR9k 00:18:25.077 04:55:38 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:25.077 04:55:38 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:18:25.077 04:55:38 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:18:25.077 04:55:38 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.PDMR9k/tests/interrupt /tmp/spdk.PDMR9k 00:18:25.077 04:55:39 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@318 -- # df -T 00:18:25.077 04:55:39 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267633664 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267633664 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=6295588864 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298181632 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=6277234688 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298181632 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=20946944 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=6298181632 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298181632 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=xfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=14364282880 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=21463302144 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # 
uses["$mount"]=7099019264 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:18:25.077 04:55:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=97242509312 00:18:25.077 04:55:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:18:25.077 04:55:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=2460270592 00:18:25.077 04:55:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:25.077 04:55:39 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:18:25.077 * Looking for test storage... 00:18:25.077 04:55:39 -- common/autotest_common.sh@359 -- # local target_space new_size 00:18:25.077 04:55:39 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:18:25.077 04:55:39 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.077 04:55:39 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:25.077 04:55:39 -- common/autotest_common.sh@363 -- # mount=/ 00:18:25.077 04:55:39 -- common/autotest_common.sh@365 -- # target_space=14364282880 00:18:25.077 04:55:39 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:18:25.077 04:55:39 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:18:25.077 04:55:39 -- common/autotest_common.sh@371 -- # [[ xfs == tmpfs ]] 00:18:25.077 04:55:39 -- common/autotest_common.sh@371 -- # [[ xfs == ramfs ]] 00:18:25.077 04:55:39 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:18:25.077 04:55:39 -- common/autotest_common.sh@372 -- # new_size=9313611776 00:18:25.077 04:55:39 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:18:25.077 04:55:39 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.077 04:55:39 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.077 04:55:39 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:25.077 04:55:39 -- common/autotest_common.sh@380 -- # return 0 00:18:25.077 04:55:39 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:18:25.077 04:55:39 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:18:25.077 04:55:39 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:25.077 04:55:39 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:25.077 04:55:39 -- common/autotest_common.sh@1672 -- # 
true 00:18:25.077 04:55:39 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:18:25.077 04:55:39 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:25.077 04:55:39 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:25.077 04:55:39 -- common/autotest_common.sh@27 -- # exec 00:18:25.077 04:55:39 -- common/autotest_common.sh@29 -- # exec 00:18:25.077 04:55:39 -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:25.077 04:55:39 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:18:25.077 04:55:39 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:25.077 04:55:39 -- common/autotest_common.sh@18 -- # set -x 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:18:25.077 04:55:39 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:18:25.077 04:55:39 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:18:25.077 04:55:39 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:18:25.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=57181 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.077 04:55:39 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 57181 /var/tmp/spdk.sock 00:18:25.078 04:55:39 -- common/autotest_common.sh@819 -- # '[' -z 57181 ']' 00:18:25.078 04:55:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.078 04:55:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:25.078 04:55:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.078 04:55:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:25.078 04:55:39 -- common/autotest_common.sh@10 -- # set +x 00:18:25.078 04:55:39 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:18:25.078 [2024-05-15 04:55:39.162982] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
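[editor's note] At this point interrupt_tgt has been launched and waitforlisten is polling until the target answers on /var/tmp/spdk.sock. A simplified stand-in for that poll loop, reconstructed only from what the trace shows (kill -0 liveness probe, max_retries=100, the "Waiting for process..." banner); the function name, socket test, and sleep interval here are illustrative, not the exact helper:

    # wait_for_rpc: hypothetical simplified waitforlisten; sketch only
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100    # matches 'local max_retries=100' in the trace
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S "$rpc_addr" ]] && return 0         # socket exists -> target is listening
            sleep 0.1                                # illustrative retry interval
        done
        return 1
    }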
00:18:25.078 [2024-05-15 04:55:39.163148] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57181 ] 00:18:25.337 [2024-05-15 04:55:39.316454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:25.337 [2024-05-15 04:55:39.561194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.337 [2024-05-15 04:55:39.561389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.337 [2024-05-15 04:55:39.561393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.905 [2024-05-15 04:55:39.963168] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:25.905 04:55:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:25.905 04:55:39 -- common/autotest_common.sh@852 -- # return 0 00:18:25.905 04:55:39 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:18:25.905 04:55:39 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.164 Malloc0 00:18:26.164 Malloc1 00:18:26.164 Malloc2 00:18:26.164 04:55:40 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:18:26.164 04:55:40 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:18:26.164 04:55:40 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:18:26.164 04:55:40 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:18:26.164 5000+0 records in 00:18:26.164 5000+0 records out 00:18:26.164 10240000 bytes (10 MB) copied, 0.0266779 s, 384 MB/s 00:18:26.164 04:55:40 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:18:26.423 AIO0 00:18:26.423 04:55:40 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 57181 00:18:26.423 04:55:40 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 57181 without_thd 00:18:26.423 04:55:40 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=57181 00:18:26.423 04:55:40 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:18:26.423 04:55:40 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:18:26.423 04:55:40 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:18:26.423 04:55:40 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:18:26.423 04:55:40 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:18:26.423 04:55:40 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:18:26.423 04:55:40 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:26.423 04:55:40 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:18:26.423 04:55:40 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:26.681 04:55:40 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:18:26.682 04:55:40 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:18:26.682 04:55:40 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
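[editor's note] reactor_get_thread_ids, invoked above with 0x1 and 0x4, resolves which SPDK thread ids run on a given reactor by matching cpumask values in thread_get_stats output. A condensed sketch; the rpc.py path and jq filter are verbatim from this trace, while the arithmetic normalization of 0x4 to the bare 4 seen in the trace is an assumption about how the conversion is done:

    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        # normalize 0x1/0x4 to the decimal form (1/4) that thread_get_stats reports
        reactor_cpumask=$(( reactor_cpumask ))
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

On reactor 0 this prints 1 (the app_thread); on reactor 2 it prints nothing until a thread is scheduled there, which is why the trace shows echo '' for mask 0x4.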
00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:18:26.682 spdk_thread ids are 1 on reactor0. 00:18:26.682 04:55:40 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:18:26.682 04:55:40 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:18:26.682 04:55:40 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:18:26.682 04:55:40 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 57181 0 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57181 0 idle 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:26.682 04:55:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57181 root 20 0 20.1t 117744 11172 S 0.0 1.0 0:00.88 reactor_0' 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@48 -- # echo 57181 root 20 0 20.1t 117744 11172 S 0.0 1.0 0:00.88 reactor_0 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:26.941 04:55:41 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:18:26.941 04:55:41 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 57181 1 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57181 1 idle 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:26.941 04:55:41 -- 
interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:26.941 04:55:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57185 root 20 0 20.1t 117744 11172 S 0.0 1.0 0:00.00 reactor_1' 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # echo 57185 root 20 0 20.1t 117744 11172 S 0.0 1.0 0:00.00 reactor_1 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:27.200 04:55:41 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:18:27.200 04:55:41 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 57181 2 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57181 2 idle 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57186 root 20 0 20.1t 117744 11172 S 0.0 1.0 0:00.00 reactor_2' 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # echo 57186 root 20 0 20.1t 117744 11172 S 0.0 1.0 0:00.00 reactor_2 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:27.200 04:55:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:27.200 04:55:41 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:18:27.200 04:55:41 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:18:27.200 04:55:41 -- 
interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:18:27.459 [2024-05-15 04:55:41.635695] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:27.459 04:55:41 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:18:27.718 [2024-05-15 04:55:41.775495] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:18:27.718 [2024-05-15 04:55:41.776894] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:27.718 04:55:41 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:18:27.718 [2024-05-15 04:55:41.915325] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:18:27.718 [2024-05-15 04:55:41.916534] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:27.718 04:55:41 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:18:27.718 04:55:41 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 57181 0 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 57181 0 busy 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:18:27.718 04:55:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57181 root 20 0 20.1t 117856 11172 R 99.9 1.0 0:01.20 reactor_0' 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@48 -- # echo 57181 root 20 0 20.1t 117856 11172 R 99.9 1.0 0:01.20 reactor_0 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:27.977 04:55:42 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:18:27.977 04:55:42 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 57181 2 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 57181 2 busy 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:18:27.977 04:55:42 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:27.977 04:55:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57186 root 20 0 20.1t 117856 11172 R 99.9 1.0 0:00.34 reactor_2' 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@48 -- # echo 57186 root 20 0 20.1t 117856 11172 R 99.9 1.0 0:00.34 reactor_2 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:18:28.236 04:55:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:28.236 04:55:42 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:18:28.494 [2024-05-15 04:55:42.479427] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:18:28.494 [2024-05-15 04:55:42.480186] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:28.494 04:55:42 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:18:28.495 04:55:42 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 57181 2 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57181 2 idle 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57186 root 20 0 20.1t 117928 11172 S 0.0 1.0 0:00.56 reactor_2' 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@48 -- # echo 57186 root 20 0 20.1t 117928 11172 S 0.0 1.0 0:00.56 reactor_2 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:28.495 04:55:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:28.495 04:55:42 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:18:28.753 [2024-05-15 04:55:42.847379] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:18:28.753 [2024-05-15 04:55:42.847903] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:28.753 04:55:42 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:18:28.753 04:55:42 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:18:28.753 04:55:42 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:18:29.012 [2024-05-15 04:55:42.991630] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:29.012 04:55:43 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 57181 0 00:18:29.012 04:55:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57181 0 idle 00:18:29.012 04:55:43 -- interrupt/interrupt_common.sh@33 -- # local pid=57181 00:18:29.012 04:55:43 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57181 -w 256 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57181 root 20 0 20.1t 118012 11172 S 0.0 1.0 0:01.97 reactor_0' 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@48 -- # echo 57181 root 20 0 20.1t 118012 11172 S 0.0 1.0 0:01.97 reactor_0 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:29.013 04:55:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:29.013 04:55:43 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:18:29.013 04:55:43 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:18:29.013 04:55:43 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:18:29.013 04:55:43 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 57181 00:18:29.013 04:55:43 -- common/autotest_common.sh@926 
-- # '[' -z 57181 ']' 00:18:29.013 04:55:43 -- common/autotest_common.sh@930 -- # kill -0 57181 00:18:29.013 04:55:43 -- common/autotest_common.sh@931 -- # uname 00:18:29.013 04:55:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.013 04:55:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57181 00:18:29.013 killing process with pid 57181 00:18:29.013 04:55:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:29.013 04:55:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:29.013 04:55:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57181' 00:18:29.013 04:55:43 -- common/autotest_common.sh@945 -- # kill 57181 00:18:29.013 04:55:43 -- common/autotest_common.sh@950 -- # wait 57181 00:18:30.914 04:55:44 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:18:30.914 04:55:44 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=57337 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 57337 /var/tmp/spdk.sock 00:18:30.914 04:55:44 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:18:30.914 04:55:44 -- common/autotest_common.sh@819 -- # '[' -z 57337 ']' 00:18:30.914 04:55:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.914 04:55:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:30.914 04:55:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.914 04:55:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:30.914 04:55:44 -- common/autotest_common.sh@10 -- # set +x 00:18:30.914 [2024-05-15 04:55:45.053059] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:30.914 [2024-05-15 04:55:45.053225] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57337 ] 00:18:31.173 [2024-05-15 04:55:45.204310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.431 [2024-05-15 04:55:45.440647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.431 [2024-05-15 04:55:45.440782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.431 [2024-05-15 04:55:45.440781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.689 [2024-05-15 04:55:45.841105] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
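[editor's note] Before the second target (pid 57337) was launched above, the first one was torn down with killprocess. Condensed, the guard sequence visible in the trace is roughly the following; the sudo branch of the real helper is skipped here:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1            # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [[ "$process_name" != sudo ]] || return 1         # real helper special-cases sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                        # reap; works because the target is a child of this shell
    }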
00:18:32.258 04:55:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:32.258 04:55:46 -- common/autotest_common.sh@852 -- # return 0 00:18:32.258 04:55:46 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:18:32.258 04:55:46 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:32.825 Malloc0 00:18:32.825 Malloc1 00:18:32.825 Malloc2 00:18:32.825 04:55:46 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:18:32.825 04:55:46 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:18:32.825 04:55:46 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:18:32.825 04:55:46 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:18:32.825 5000+0 records in 00:18:32.825 5000+0 records out 00:18:32.825 10240000 bytes (10 MB) copied, 0.0308543 s, 332 MB/s 00:18:32.825 04:55:46 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:18:33.084 AIO0 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 57337 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 57337 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=57337 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:18:33.084 04:55:47 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:18:33.084 04:55:47 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:18:33.343 04:55:47 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:18:33.343 spdk_thread ids are 1 on reactor0. 
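[editor's note] Every reactor_is_idle / reactor_is_busy check that follows is one top snapshot: grab the thread line for reactor_<idx>, pull the %CPU column, compare against a threshold. A condensed sketch of the parsing chain traced throughout this log; the 70/30 thresholds, the awk column 9, and the sed strip are taken from the traced commands, the ${cpu_rate%.*} truncation (99.9 -> 99) is an assumption, and the retry loop around top is omitted:

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3   # state is "busy" or "idle"
        local top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}        # drop the decimal for integer comparison
        if [[ $state = busy ]]; then
            [[ $cpu_rate -lt 70 ]] && return 1   # a polling reactor should sit near 100%
        else
            [[ $cpu_rate -gt 30 ]] && return 1   # an interrupt-mode reactor should sit near 0%
        fi
        return 0
    }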
00:18:33.343 04:55:47 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:18:33.343 04:55:47 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:18:33.343 04:55:47 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 57337 0 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57337 0 idle 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:33.343 04:55:47 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57337 root 20 0 20.1t 121656 11172 R 0.0 1.0 0:00.87 reactor_0' 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@48 -- # echo 57337 root 20 0 20.1t 121656 11172 R 0.0 1.0 0:00.87 reactor_0 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:33.603 04:55:47 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:18:33.603 04:55:47 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 57337 1 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57337 1 idle 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:33.603 04:55:47 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:18:33.861 04:55:47 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57341 root 20 0 20.1t 121656 11172 S 0.0 1.0 0:00.00 reactor_1' 00:18:33.861 04:55:47 -- interrupt/interrupt_common.sh@48 -- # echo 57341 root 20 0 20.1t 121656 11172 S 0.0 1.0 0:00.00 reactor_1 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:33.862 04:55:47 -- 
interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:33.862 04:55:47 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:18:33.862 04:55:47 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 57337 2 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57337 2 idle 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:33.862 04:55:47 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57342 root 20 0 20.1t 122200 11172 S 0.0 1.0 0:00.00 reactor_2' 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@48 -- # echo 57342 root 20 0 20.1t 122200 11172 S 0.0 1.0 0:00.00 reactor_2 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:33.862 04:55:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:33.862 04:55:48 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:18:33.862 04:55:48 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:18:34.120 [2024-05-15 04:55:48.249317] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:18:34.120 [2024-05-15 04:55:48.249597] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:18:34.120 [2024-05-15 04:55:48.249950] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:34.120 04:55:48 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:18:34.378 [2024-05-15 04:55:48.469305] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
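[editor's note] The disable/enable notices on either side of this point are driven by a single RPC, exactly as traced above; -d disables interrupt mode, i.e. forces the reactor back to polling, and omitting it switches the reactor back to interrupts:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # force reactor 0 back to poll mode (disable interrupts)...
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    # ...and later return it to interrupt mode
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0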
00:18:34.378 [2024-05-15 04:55:48.469941] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:34.378 04:55:48 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:18:34.378 04:55:48 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 57337 0 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 57337 0 busy 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:34.378 04:55:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57337 root 20 0 20.1t 122280 11184 R 99.9 1.0 0:01.28 reactor_0' 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@48 -- # echo 57337 root 20 0 20.1t 122280 11184 R 99.9 1.0 0:01.28 reactor_0 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:34.636 04:55:48 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:18:34.636 04:55:48 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 57337 2 00:18:34.636 04:55:48 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 57337 2 busy 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57342 root 20 0 20.1t 122280 11184 R 99.9 1.0 0:00.35 reactor_2' 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@48 -- # echo 57342 root 20 0 20.1t 122280 11184 R 99.9 1.0 0:00.35 reactor_2 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:18:34.637 04:55:48 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:18:34.637 04:55:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:34.637 04:55:48 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:18:34.895 [2024-05-15 04:55:49.041407] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:18:34.895 [2024-05-15 04:55:49.041538] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:34.895 04:55:49 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:18:34.895 04:55:49 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 57337 2 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57337 2 idle 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:18:34.895 04:55:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57342 root 20 0 20.1t 122280 11184 S 0.0 1.0 0:00.57 reactor_2' 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@48 -- # echo 57342 root 20 0 20.1t 122280 11184 S 0.0 1.0 0:00.57 reactor_2 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:35.154 04:55:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:35.154 04:55:49 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:18:35.413 [2024-05-15 04:55:49.429502] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:18:35.413 [2024-05-15 04:55:49.429912] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
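[editor's note] Pulling the pieces together, the shape of each reactor_set_intr_mode pass is reconstructed below from the trace ordering: disable interrupts on reactors 0 and 2 and require both to spin busy, then re-enable (2 first, then 0) and require both to go idle. Helper names are as they appear in the trace; the without_thd branches that pin thd0 threads via thread_set_cpumask are omitted from this schematic:

    reactor_set_intr_mode() {
        local spdk_pid=$1
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        # 1. drop reactors 0 and 2 out of interrupt mode; both must climb to ~100% CPU
        for i in 0 2; do
            "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode "$i" -d
            reactor_is_busy "$spdk_pid" "$i"
        done
        # 2. restore interrupt mode; both must fall back to ~0% CPU
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2
        reactor_is_idle "$spdk_pid" 2
        "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0
        reactor_is_idle "$spdk_pid" 0
    }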
00:18:35.413 [2024-05-15 04:55:49.429960] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:18:35.413 04:55:49 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:18:35.413 04:55:49 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 57337 0 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 57337 0 idle 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@33 -- # local pid=57337 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 57337 -w 256 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 57337 root 20 0 20.1t 122388 11184 S 0.0 1.0 0:02.05 reactor_0' 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@48 -- # echo 57337 root 20 0 20.1t 122388 11184 S 0.0 1.0 0:02.05 reactor_0 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:18:35.413 04:55:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:18:35.413 04:55:49 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:18:35.413 04:55:49 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:18:35.413 04:55:49 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:35.413 04:55:49 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 57337 00:18:35.413 04:55:49 -- common/autotest_common.sh@926 -- # '[' -z 57337 ']' 00:18:35.413 04:55:49 -- common/autotest_common.sh@930 -- # kill -0 57337 00:18:35.413 04:55:49 -- common/autotest_common.sh@931 -- # uname 00:18:35.413 04:55:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:35.413 04:55:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57337 00:18:35.672 killing process with pid 57337 00:18:35.672 04:55:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:35.672 04:55:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:35.672 04:55:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57337' 00:18:35.672 04:55:49 -- common/autotest_common.sh@945 -- # kill 57337 00:18:35.672 04:55:49 -- common/autotest_common.sh@950 -- # wait 57337 00:18:37.573 04:55:51 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:18:37.573 ************************************ 00:18:37.573 END 
TEST reactor_set_interrupt 00:18:37.573 ************************************ 00:18:37.573 00:18:37.573 real 0m12.519s 00:18:37.573 user 0m12.287s 00:18:37.573 sys 0m1.760s 00:18:37.573 04:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.573 04:55:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.573 04:55:51 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:18:37.573 04:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:37.573 04:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:37.573 04:55:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.573 ************************************ 00:18:37.573 START TEST reap_unregistered_poller 00:18:37.573 ************************************ 00:18:37.573 04:55:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:18:37.573 * Looking for test storage... 00:18:37.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.573 04:55:51 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:37.573 04:55:51 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:37.573 04:55:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:37.573 04:55:51 -- common/autotest_common.sh@34 -- # set -e 00:18:37.573 04:55:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:37.573 04:55:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:37.573 04:55:51 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:37.573 04:55:51 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:37.573 04:55:51 -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:18:37.573 04:55:51 -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:18:37.573 04:55:51 -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:18:37.573 04:55:51 -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:18:37.573 04:55:51 -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:18:37.573 04:55:51 -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:18:37.573 04:55:51 -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:18:37.573 04:55:51 -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:18:37.573 04:55:51 -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:18:37.573 04:55:51 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@13 -- # CONFIG_TESTS=y 00:18:37.573 04:55:51 -- common/build_config.sh@14 -- # CONFIG_APPS=y 00:18:37.573 04:55:51 -- 
common/build_config.sh@15 -- # CONFIG_ISAL_CRYPTO=n 00:18:37.573 04:55:51 -- common/build_config.sh@16 -- # CONFIG_LIBDIR= 00:18:37.573 04:55:51 -- common/build_config.sh@17 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:37.573 04:55:51 -- common/build_config.sh@18 -- # CONFIG_DAOS_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@19 -- # CONFIG_ISCSI_INITIATOR=n 00:18:37.573 04:55:51 -- common/build_config.sh@20 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:37.573 04:55:51 -- common/build_config.sh@21 -- # CONFIG_ASAN=y 00:18:37.573 04:55:51 -- common/build_config.sh@22 -- # CONFIG_LTO=n 00:18:37.573 04:55:51 -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:37.573 04:55:51 -- common/build_config.sh@24 -- # CONFIG_FUZZER=n 00:18:37.573 04:55:51 -- common/build_config.sh@25 -- # CONFIG_USDT=n 00:18:37.573 04:55:51 -- common/build_config.sh@26 -- # CONFIG_VTUNE=n 00:18:37.573 04:55:51 -- common/build_config.sh@27 -- # CONFIG_VHOST=y 00:18:37.573 04:55:51 -- common/build_config.sh@28 -- # CONFIG_WPDK_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@29 -- # CONFIG_UBLK=n 00:18:37.573 04:55:51 -- common/build_config.sh@30 -- # CONFIG_URING=n 00:18:37.573 04:55:51 -- common/build_config.sh@31 -- # CONFIG_SMA=n 00:18:37.573 04:55:51 -- common/build_config.sh@32 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:37.573 04:55:51 -- common/build_config.sh@33 -- # CONFIG_IDXD_KERNEL=n 00:18:37.573 04:55:51 -- common/build_config.sh@34 -- # CONFIG_FC_PATH= 00:18:37.573 04:55:51 -- common/build_config.sh@35 -- # CONFIG_PREFIX=/usr/local 00:18:37.573 04:55:51 -- common/build_config.sh@36 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:18:37.573 04:55:51 -- common/build_config.sh@37 -- # CONFIG_XNVME=n 00:18:37.573 04:55:51 -- common/build_config.sh@38 -- # CONFIG_RDMA_PROV=verbs 00:18:37.573 04:55:51 -- common/build_config.sh@39 -- # CONFIG_RDMA_SET_TOS=y 00:18:37.573 04:55:51 -- common/build_config.sh@40 -- # CONFIG_FUZZER_LIB= 00:18:37.573 04:55:51 -- common/build_config.sh@41 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:37.573 04:55:51 -- common/build_config.sh@42 -- # CONFIG_ARCH=native 00:18:37.573 04:55:51 -- common/build_config.sh@43 -- # CONFIG_PGO_CAPTURE=n 00:18:37.573 04:55:51 -- common/build_config.sh@44 -- # CONFIG_DAOS=y 00:18:37.573 04:55:51 -- common/build_config.sh@45 -- # CONFIG_WERROR=y 00:18:37.573 04:55:51 -- common/build_config.sh@46 -- # CONFIG_DEBUG=y 00:18:37.573 04:55:51 -- common/build_config.sh@47 -- # CONFIG_AVAHI=n 00:18:37.573 04:55:51 -- common/build_config.sh@48 -- # CONFIG_CROSS_PREFIX= 00:18:37.573 04:55:51 -- common/build_config.sh@49 -- # CONFIG_PGO_USE=n 00:18:37.573 04:55:51 -- common/build_config.sh@50 -- # CONFIG_CRYPTO=n 00:18:37.573 04:55:51 -- common/build_config.sh@51 -- # CONFIG_HAVE_ARC4RANDOM=n 00:18:37.573 04:55:51 -- common/build_config.sh@52 -- # CONFIG_OPENSSL_PATH= 00:18:37.573 04:55:51 -- common/build_config.sh@53 -- # CONFIG_EXAMPLES=y 00:18:37.573 04:55:51 -- common/build_config.sh@54 -- # CONFIG_DPDK_INC_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@55 -- # CONFIG_MAX_LCORES= 00:18:37.573 04:55:51 -- common/build_config.sh@56 -- # CONFIG_VIRTIO=y 00:18:37.573 04:55:51 -- common/build_config.sh@57 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:37.573 04:55:51 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB=n 00:18:37.573 04:55:51 -- common/build_config.sh@59 -- # CONFIG_UBSAN=n 00:18:37.573 04:55:51 -- common/build_config.sh@60 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:37.573 04:55:51 -- common/build_config.sh@61 -- # 
CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:37.573 04:55:51 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:37.573 04:55:51 -- common/build_config.sh@63 -- # CONFIG_URING_PATH= 00:18:37.573 04:55:51 -- common/build_config.sh@64 -- # CONFIG_NVME_CUSE=y 00:18:37.573 04:55:51 -- common/build_config.sh@65 -- # CONFIG_URING_ZNS=n 00:18:37.573 04:55:51 -- common/build_config.sh@66 -- # CONFIG_VFIO_USER=n 00:18:37.573 04:55:51 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:18:37.573 04:55:51 -- common/build_config.sh@68 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:18:37.573 04:55:51 -- common/build_config.sh@69 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:37.573 04:55:51 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:18:37.573 04:55:51 -- common/build_config.sh@71 -- # CONFIG_RAID5F=n 00:18:37.573 04:55:51 -- common/build_config.sh@72 -- # CONFIG_VFIO_USER_DIR= 00:18:37.573 04:55:51 -- common/build_config.sh@73 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:37.573 04:55:51 -- common/build_config.sh@74 -- # CONFIG_TSAN=n 00:18:37.573 04:55:51 -- common/build_config.sh@75 -- # CONFIG_IDXD=y 00:18:37.573 04:55:51 -- common/build_config.sh@76 -- # CONFIG_OCF=n 00:18:37.574 04:55:51 -- common/build_config.sh@77 -- # CONFIG_CRYPTO_MLX5=n 00:18:37.574 04:55:51 -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:37.574 04:55:51 -- common/build_config.sh@79 -- # CONFIG_COVERAGE=y 00:18:37.574 04:55:51 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:37.574 04:55:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:37.574 04:55:51 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:37.574 04:55:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:37.574 04:55:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:37.574 04:55:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:37.574 04:55:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:37.574 04:55:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:37.574 04:55:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:37.574 04:55:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:37.574 04:55:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:37.574 04:55:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:37.574 04:55:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:37.574 04:55:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:37.574 04:55:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:37.574 04:55:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:37.574 #define SPDK_CONFIG_H 00:18:37.574 #define SPDK_CONFIG_APPS 1 00:18:37.574 #define SPDK_CONFIG_ARCH native 00:18:37.574 #define SPDK_CONFIG_ASAN 1 00:18:37.574 #undef SPDK_CONFIG_AVAHI 00:18:37.574 #undef SPDK_CONFIG_CET 00:18:37.574 #define SPDK_CONFIG_COVERAGE 1 00:18:37.574 #define SPDK_CONFIG_CROSS_PREFIX 00:18:37.574 #undef SPDK_CONFIG_CRYPTO 00:18:37.574 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:37.574 #undef SPDK_CONFIG_CUSTOMOCF 00:18:37.574 #define SPDK_CONFIG_DAOS 1 00:18:37.574 #define 
SPDK_CONFIG_DAOS_DIR 00:18:37.574 #define SPDK_CONFIG_DEBUG 1 00:18:37.574 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:37.574 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:37.574 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:37.574 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:37.574 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:37.574 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:37.574 #define SPDK_CONFIG_EXAMPLES 1 00:18:37.574 #undef SPDK_CONFIG_FC 00:18:37.574 #define SPDK_CONFIG_FC_PATH 00:18:37.574 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:37.574 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:37.574 #undef SPDK_CONFIG_FUSE 00:18:37.574 #undef SPDK_CONFIG_FUZZER 00:18:37.574 #define SPDK_CONFIG_FUZZER_LIB 00:18:37.574 #undef SPDK_CONFIG_GOLANG 00:18:37.574 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:18:37.574 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:37.574 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:37.574 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:37.574 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:18:37.574 #define SPDK_CONFIG_IDXD 1 00:18:37.574 #undef SPDK_CONFIG_IDXD_KERNEL 00:18:37.574 #undef SPDK_CONFIG_IPSEC_MB 00:18:37.574 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:37.574 #undef SPDK_CONFIG_ISAL 00:18:37.574 #undef SPDK_CONFIG_ISAL_CRYPTO 00:18:37.574 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:18:37.574 #define SPDK_CONFIG_LIBDIR 00:18:37.574 #undef SPDK_CONFIG_LTO 00:18:37.574 #define SPDK_CONFIG_MAX_LCORES 00:18:37.574 #define SPDK_CONFIG_NVME_CUSE 1 00:18:37.574 #undef SPDK_CONFIG_OCF 00:18:37.574 #define SPDK_CONFIG_OCF_PATH 00:18:37.574 #define SPDK_CONFIG_OPENSSL_PATH 00:18:37.574 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:37.574 #undef SPDK_CONFIG_PGO_USE 00:18:37.574 #define SPDK_CONFIG_PREFIX /usr/local 00:18:37.574 #undef SPDK_CONFIG_RAID5F 00:18:37.574 #undef SPDK_CONFIG_RBD 00:18:37.574 #define SPDK_CONFIG_RDMA 1 00:18:37.574 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:37.574 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:37.574 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:18:37.574 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:37.574 #undef SPDK_CONFIG_SHARED 00:18:37.574 #undef SPDK_CONFIG_SMA 00:18:37.574 #define SPDK_CONFIG_TESTS 1 00:18:37.574 #undef SPDK_CONFIG_TSAN 00:18:37.574 #undef SPDK_CONFIG_UBLK 00:18:37.574 #undef SPDK_CONFIG_UBSAN 00:18:37.574 #define SPDK_CONFIG_UNIT_TESTS 1 00:18:37.574 #undef SPDK_CONFIG_URING 00:18:37.574 #define SPDK_CONFIG_URING_PATH 00:18:37.574 #undef SPDK_CONFIG_URING_ZNS 00:18:37.574 #undef SPDK_CONFIG_USDT 00:18:37.574 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:37.574 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:37.574 #undef SPDK_CONFIG_VFIO_USER 00:18:37.574 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:37.574 #define SPDK_CONFIG_VHOST 1 00:18:37.574 #define SPDK_CONFIG_VIRTIO 1 00:18:37.574 #undef SPDK_CONFIG_VTUNE 00:18:37.574 #define SPDK_CONFIG_VTUNE_DIR 00:18:37.574 #define SPDK_CONFIG_WERROR 1 00:18:37.574 #define SPDK_CONFIG_WPDK_DIR 00:18:37.574 #undef SPDK_CONFIG_XNVME 00:18:37.574 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:37.574 04:55:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:37.574 04:55:51 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.574 04:55:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.574 04:55:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.574 04:55:51 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.574 04:55:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:37.574 04:55:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:37.574 04:55:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:37.574 04:55:51 -- paths/export.sh@5 -- # export PATH 00:18:37.574 04:55:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:37.574 04:55:51 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:37.574 04:55:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:37.574 04:55:51 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:37.574 04:55:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:37.574 04:55:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:37.574 04:55:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:37.574 04:55:51 -- pm/common@16 -- # TEST_TAG=N/A 00:18:37.574 04:55:51 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:37.574 04:55:51 -- common/autotest_common.sh@52 -- # : 1 00:18:37.574 04:55:51 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:18:37.574 04:55:51 -- common/autotest_common.sh@56 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:37.574 04:55:51 -- common/autotest_common.sh@58 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:18:37.574 04:55:51 -- common/autotest_common.sh@60 -- # : 1 00:18:37.574 04:55:51 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:37.574 04:55:51 -- common/autotest_common.sh@62 -- # : 1 00:18:37.574 04:55:51 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:18:37.574 04:55:51 -- common/autotest_common.sh@64 -- # : 00:18:37.574 04:55:51 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:18:37.574 04:55:51 -- common/autotest_common.sh@66 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:18:37.574 04:55:51 -- common/autotest_common.sh@68 -- # : 0 00:18:37.574 04:55:51 -- 
common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:18:37.574 04:55:51 -- common/autotest_common.sh@70 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:18:37.574 04:55:51 -- common/autotest_common.sh@72 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:37.574 04:55:51 -- common/autotest_common.sh@74 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:18:37.574 04:55:51 -- common/autotest_common.sh@76 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:18:37.574 04:55:51 -- common/autotest_common.sh@78 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:18:37.574 04:55:51 -- common/autotest_common.sh@80 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:18:37.574 04:55:51 -- common/autotest_common.sh@82 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:18:37.574 04:55:51 -- common/autotest_common.sh@84 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:18:37.574 04:55:51 -- common/autotest_common.sh@86 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:18:37.574 04:55:51 -- common/autotest_common.sh@88 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:18:37.574 04:55:51 -- common/autotest_common.sh@90 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:37.574 04:55:51 -- common/autotest_common.sh@92 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:18:37.574 04:55:51 -- common/autotest_common.sh@94 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:18:37.574 04:55:51 -- common/autotest_common.sh@96 -- # : rdma 00:18:37.574 04:55:51 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:37.574 04:55:51 -- common/autotest_common.sh@98 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:18:37.574 04:55:51 -- common/autotest_common.sh@100 -- # : 0 00:18:37.574 04:55:51 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:18:37.575 04:55:51 -- common/autotest_common.sh@102 -- # : 1 00:18:37.575 04:55:51 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:18:37.575 04:55:51 -- common/autotest_common.sh@104 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:18:37.575 04:55:51 -- common/autotest_common.sh@106 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:18:37.575 04:55:51 -- common/autotest_common.sh@108 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:18:37.575 04:55:51 -- common/autotest_common.sh@110 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:18:37.575 04:55:51 -- common/autotest_common.sh@112 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:37.575 04:55:51 -- common/autotest_common.sh@114 -- # : 1 00:18:37.575 04:55:51 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:18:37.575 04:55:51 -- common/autotest_common.sh@116 -- # : 0 00:18:37.575 
04:55:51 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:18:37.575 04:55:51 -- common/autotest_common.sh@118 -- # : 00:18:37.575 04:55:51 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:37.575 04:55:51 -- common/autotest_common.sh@120 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:18:37.575 04:55:51 -- common/autotest_common.sh@122 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:18:37.575 04:55:51 -- common/autotest_common.sh@124 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:18:37.575 04:55:51 -- common/autotest_common.sh@126 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:18:37.575 04:55:51 -- common/autotest_common.sh@128 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:18:37.575 04:55:51 -- common/autotest_common.sh@130 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:18:37.575 04:55:51 -- common/autotest_common.sh@132 -- # : 00:18:37.575 04:55:51 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:18:37.575 04:55:51 -- common/autotest_common.sh@134 -- # : true 00:18:37.575 04:55:51 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:18:37.575 04:55:51 -- common/autotest_common.sh@136 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:18:37.575 04:55:51 -- common/autotest_common.sh@138 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:18:37.575 04:55:51 -- common/autotest_common.sh@140 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:18:37.575 04:55:51 -- common/autotest_common.sh@142 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:18:37.575 04:55:51 -- common/autotest_common.sh@144 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:18:37.575 04:55:51 -- common/autotest_common.sh@146 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:18:37.575 04:55:51 -- common/autotest_common.sh@148 -- # : 00:18:37.575 04:55:51 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:18:37.575 04:55:51 -- common/autotest_common.sh@150 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:18:37.575 04:55:51 -- common/autotest_common.sh@152 -- # : 1 00:18:37.575 04:55:51 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:18:37.575 04:55:51 -- common/autotest_common.sh@154 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:18:37.575 04:55:51 -- common/autotest_common.sh@156 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:18:37.575 04:55:51 -- common/autotest_common.sh@158 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:18:37.575 04:55:51 -- common/autotest_common.sh@160 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:18:37.575 04:55:51 -- common/autotest_common.sh@163 -- # : 00:18:37.575 04:55:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:18:37.575 04:55:51 -- common/autotest_common.sh@165 -- # : 0 
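The long run of ': <value>' lines each followed by 'export SPDK_TEST_*' above is bash's default-then-export idiom: each flag keeps whatever value the caller already set and falls back to a default otherwise, then is exported for child scripts. One representative pair written out plainly (the pattern is the same for every flag in the list):

: "${SPDK_RUN_FUNCTIONAL_TEST:=0}"   # ':' is a no-op command; the expansion assigns 0 only if unset
export SPDK_RUN_FUNCTIONAL_TEST      # in this run the conf file already set it to 1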
00:18:37.575 04:55:51 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:18:37.575 04:55:51 -- common/autotest_common.sh@167 -- # : 0 00:18:37.575 04:55:51 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:37.575 04:55:51 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:37.575 04:55:51 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:37.575 04:55:51 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:37.575 04:55:51 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:37.575 04:55:51 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:37.575 04:55:51 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:37.575 04:55:51 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:18:37.575 04:55:51 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:37.575 04:55:51 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:37.575 04:55:51 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:37.575 04:55:51 -- common/autotest_common.sh@190 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:37.575 04:55:51 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:37.575 04:55:51 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:18:37.575 04:55:51 -- common/autotest_common.sh@196 -- # cat 00:18:37.575 04:55:51 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:18:37.575 04:55:51 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:37.575 04:55:51 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:37.575 04:55:51 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:37.575 04:55:51 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:37.575 04:55:51 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:18:37.575 04:55:51 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:18:37.575 04:55:51 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:37.575 04:55:51 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:37.575 04:55:51 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:37.575 04:55:51 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:37.575 04:55:51 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:18:37.575 04:55:51 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:18:37.575 04:55:51 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:18:37.575 04:55:51 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:18:37.575 04:55:51 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:37.575 04:55:51 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:37.575 04:55:51 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:37.575 04:55:51 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:37.575 04:55:51 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:18:37.575 04:55:51 -- common/autotest_common.sh@249 -- # export valgrind= 00:18:37.575 04:55:51 -- common/autotest_common.sh@249 -- # valgrind= 00:18:37.575 04:55:51 -- common/autotest_common.sh@255 -- # uname -s 00:18:37.575 04:55:51 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:18:37.575 04:55:51 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:18:37.575 04:55:51 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:18:37.575 04:55:51 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:18:37.575 04:55:51 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:18:37.575 04:55:51 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:18:37.575 04:55:51 -- common/autotest_common.sh@265 -- # MAKE=make 00:18:37.575 04:55:51 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:18:37.575 04:55:51 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:18:37.575 04:55:51 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:18:37.575 04:55:51 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 
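The asan_suppression_file steps above build a LeakSanitizer suppression list on the fly and point LSAN at it, so a known fuse3 leak does not fail ASAN-enabled runs. Condensed, using the same path and suppression entry as the trace:

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"     # suppress a known leak in libfuse3
export LSAN_OPTIONS=suppressions=$asan_suppression_file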
00:18:37.575 04:55:51 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:18:37.575 04:55:51 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:18:37.575 04:55:51 -- common/autotest_common.sh@309 -- # [[ -z 57532 ]] 00:18:37.575 04:55:51 -- common/autotest_common.sh@309 -- # kill -0 57532 00:18:37.575 04:55:51 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:18:37.575 04:55:51 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:18:37.575 04:55:51 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:18:37.575 04:55:51 -- common/autotest_common.sh@322 -- # local mount target_dir 00:18:37.575 04:55:51 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:18:37.575 04:55:51 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:18:37.575 04:55:51 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:18:37.575 04:55:51 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:18:37.576 04:55:51 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.22YwxY 00:18:37.576 04:55:51 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:37.576 04:55:51 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.22YwxY/tests/interrupt /tmp/spdk.22YwxY 00:18:37.576 04:55:51 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@318 -- # df -T 00:18:37.576 04:55:51 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267633664 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267633664 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=6295588864 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298181632 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=6277234688 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298181632 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=20946944 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 
00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=6298181632 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298181632 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=xfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=14364258304 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=21463302144 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=7099043840 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:18:37.576 04:55:51 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # avails["$mount"]=97242378240 00:18:37.576 04:55:51 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:18:37.576 04:55:51 -- common/autotest_common.sh@354 -- # uses["$mount"]=2460401664 00:18:37.576 04:55:51 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:18:37.576 04:55:51 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:18:37.576 * Looking for test storage... 00:18:37.576 04:55:51 -- common/autotest_common.sh@359 -- # local target_space new_size 00:18:37.576 04:55:51 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:18:37.576 04:55:51 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.576 04:55:51 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:37.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
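The mounts/fss/sizes/avails bookkeeping above is the test-storage helper parsing df -T output into per-mount arrays; the lines that follow pick the mount point backing the test directory and check it against the ~2.2 GB request. A condensed sketch of that flow (block-size normalization and the 95% headroom check are elided, and the names, while matching the trace, are illustrative):

declare -A mounts fss sizes avails uses
requested_size=2214592512                     # 2 GiB plus 64 MiB of slack, as requested above
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

target_dir=/home/vagrant/spdk_repo/spdk/test/interrupt
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # mount point backing the test dir
if (( avails["$mount"] >= requested_size )); then
  export SPDK_TEST_STORAGE=$target_dir
  printf '* Found test storage at %s\n' "$target_dir"
fi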
00:18:37.576 04:55:51 -- common/autotest_common.sh@363 -- # mount=/ 00:18:37.576 04:55:51 -- common/autotest_common.sh@365 -- # target_space=14364258304 00:18:37.576 04:55:51 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:18:37.576 04:55:51 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:18:37.576 04:55:51 -- common/autotest_common.sh@371 -- # [[ xfs == tmpfs ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@371 -- # [[ xfs == ramfs ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@372 -- # new_size=9313636352 00:18:37.576 04:55:51 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:18:37.576 04:55:51 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.576 04:55:51 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.576 04:55:51 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:18:37.576 04:55:51 -- common/autotest_common.sh@380 -- # return 0 00:18:37.576 04:55:51 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:18:37.576 04:55:51 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:18:37.576 04:55:51 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:37.576 04:55:51 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:37.576 04:55:51 -- common/autotest_common.sh@1672 -- # true 00:18:37.576 04:55:51 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:18:37.576 04:55:51 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:37.576 04:55:51 -- common/autotest_common.sh@27 -- # exec 00:18:37.576 04:55:51 -- common/autotest_common.sh@29 -- # exec 00:18:37.576 04:55:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:37.576 04:55:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:37.576 04:55:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:37.576 04:55:51 -- common/autotest_common.sh@18 -- # set -x 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:18:37.576 04:55:51 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:18:37.576 04:55:51 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:18:37.576 04:55:51 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=57579 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 57579 /var/tmp/spdk.sock 00:18:37.576 04:55:51 -- common/autotest_common.sh@819 -- # '[' -z 57579 ']' 00:18:37.576 04:55:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.576 04:55:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:37.576 04:55:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.576 04:55:51 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:18:37.576 04:55:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:37.576 04:55:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.576 [2024-05-15 04:55:51.721186] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
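The bring-up sequence traced here is the standard SPDK pattern: launch interrupt_tgt in the background on a 3-core mask, arm a cleanup trap, and block in waitforlisten until the RPC socket answers. Reduced to its skeleton (the retry count and sleep are illustrative; the real helper also confirms the target responds to rpc.py, not just that the socket file exists):

rpc_sock=/var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
intr_tgt_pid=$!
trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for ((i = 0; i < 100; i++)); do
  kill -0 "$intr_tgt_pid" || exit 1    # target died during startup
  [[ -S $rpc_sock ]] && break          # socket is up; the target is listening
  sleep 0.1
done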
00:18:37.576 [2024-05-15 04:55:51.721383] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57579 ] 00:18:37.835 [2024-05-15 04:55:51.875809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.094 [2024-05-15 04:55:52.109508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.094 [2024-05-15 04:55:52.109666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.094 [2024-05-15 04:55:52.109666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.352 [2024-05-15 04:55:52.505512] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:38.352 04:55:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:38.352 04:55:52 -- common/autotest_common.sh@852 -- # return 0 00:18:38.352 04:55:52 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:18:38.352 04:55:52 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:18:38.352 04:55:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.352 04:55:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.352 04:55:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:18:38.610 "name": "app_thread", 00:18:38.610 "id": 1, 00:18:38.610 "active_pollers": [], 00:18:38.610 "timed_pollers": [ 00:18:38.610 { 00:18:38.610 "name": "rpc_subsystem_poll", 00:18:38.610 "id": 1, 00:18:38.610 "state": "waiting", 00:18:38.610 "run_count": 0, 00:18:38.610 "busy_count": 0, 00:18:38.610 "period_ticks": 8400000 00:18:38.610 } 00:18:38.610 ], 00:18:38.610 "paused_pollers": [] 00:18:38.610 }' 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:18:38.610 04:55:52 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:18:38.610 04:55:52 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:18:38.610 04:55:52 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:18:38.610 04:55:52 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:18:38.610 5000+0 records in 00:18:38.610 5000+0 records out 00:18:38.610 10240000 bytes (10 MB) copied, 0.026783 s, 382 MB/s 00:18:38.610 04:55:52 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:18:38.868 AIO0 00:18:38.868 04:55:52 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 
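That second thread_get_pollers call (its output follows below) is the core of reap_unregistered_poller: snapshot the app thread's pollers over RPC before and after the AIO bdev work, and check that the list returns to the native set once the bdev's poller is gone. The jq plumbing for one snapshot looks roughly like this; rpc_cmd here stands in for scripts/rpc.py against /var/tmp/spdk.sock:

rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

app_thread=$(rpc_cmd thread_get_pollers | jq -r '.threads[0]')
native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
native_pollers+=' '
native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
echo "native pollers:$native_pollers"   # ' rpc_subsystem_poll' in the first snapshot above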
00:18:39.124 04:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.124 04:55:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.124 04:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:18:39.124 "name": "app_thread", 00:18:39.124 "id": 1, 00:18:39.124 "active_pollers": [], 00:18:39.124 "timed_pollers": [ 00:18:39.124 { 00:18:39.124 "name": "rpc_subsystem_poll", 00:18:39.124 "id": 1, 00:18:39.124 "state": "waiting", 00:18:39.124 "run_count": 0, 00:18:39.124 "busy_count": 0, 00:18:39.124 "period_ticks": 8400000 00:18:39.124 } 00:18:39.124 ], 00:18:39.124 "paused_pollers": [] 00:18:39.124 }' 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:18:39.124 04:55:53 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:18:39.381 04:55:53 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:18:39.381 04:55:53 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:18:39.381 04:55:53 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:39.381 04:55:53 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 57579 00:18:39.381 04:55:53 -- common/autotest_common.sh@926 -- # '[' -z 57579 ']' 00:18:39.381 04:55:53 -- common/autotest_common.sh@930 -- # kill -0 57579 00:18:39.381 04:55:53 -- common/autotest_common.sh@931 -- # uname 00:18:39.381 04:55:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:39.381 04:55:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57579 00:18:39.381 killing process with pid 57579 00:18:39.381 04:55:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:39.381 04:55:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:39.381 04:55:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57579' 00:18:39.381 04:55:53 -- common/autotest_common.sh@945 -- # kill 57579 00:18:39.381 04:55:53 -- common/autotest_common.sh@950 -- # wait 57579 00:18:40.778 04:55:54 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:18:40.778 04:55:54 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:18:40.778 ************************************ 00:18:40.778 END TEST reap_unregistered_poller 00:18:40.778 ************************************ 00:18:40.778 00:18:40.778 real 0m3.331s 00:18:40.778 user 0m2.885s 00:18:40.778 sys 0m0.592s 00:18:40.778 04:55:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.778 04:55:54 -- common/autotest_common.sh@10 -- # set +x 00:18:40.778 04:55:54 -- spdk/autotest.sh@204 -- # uname -s 00:18:40.778 04:55:54 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:18:40.778 04:55:54 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:18:40.778 04:55:54 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:18:40.778 04:55:54 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:40.778 04:55:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:40.778 04:55:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.778 04:55:54 -- common/autotest_common.sh@10 -- # set +x 
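killprocess, used for both targets in this section, guards before it kills: the pid must still exist, and on Linux the process's command name must not be sudo. A simplified re-creation of that logic (illustrative, but it follows the guards visible in the trace):

killprocess() {
  local pid=$1 process_name
  [[ -n $pid ]] || return 1
  kill -0 "$pid" || return 1                          # still running?
  if [[ $(uname) == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for both targets above
    [[ $process_name == sudo ]] && return 1           # never SIGTERM sudo itself
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                         # reap the child and propagate its status
}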
00:18:40.778 ************************************ 00:18:40.778 START TEST spdk_dd 00:18:40.778 ************************************ 00:18:40.778 04:55:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:40.778 * Looking for test storage... 00:18:40.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:40.778 04:55:54 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.778 04:55:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.778 04:55:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.778 04:55:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.778 04:55:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:40.778 04:55:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:40.778 04:55:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:40.778 04:55:54 -- paths/export.sh@5 -- # export PATH 00:18:40.778 04:55:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:40.778 04:55:54 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:41.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:18:41.037 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:41.037 04:55:55 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:18:41.037 04:55:55 -- dd/dd.sh@11 -- # nvme_in_userspace 00:18:41.037 04:55:55 -- scripts/common.sh@311 -- # local bdf bdfs 00:18:41.037 04:55:55 -- scripts/common.sh@312 -- # local nvmes 00:18:41.037 04:55:55 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:18:41.037 04:55:55 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:41.037 04:55:55 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:18:41.037 04:55:55 -- scripts/common.sh@297 -- # local bdf= 00:18:41.037 04:55:55 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:18:41.037 04:55:55 -- scripts/common.sh@232 -- # local class 00:18:41.037 04:55:55 -- scripts/common.sh@233 -- # local subclass 00:18:41.037 04:55:55 -- scripts/common.sh@234 -- # local progif 00:18:41.037 04:55:55 -- scripts/common.sh@235 -- # printf %02x 1 00:18:41.037 04:55:55 -- 
scripts/common.sh@235 -- # class=01 00:18:41.037 04:55:55 -- scripts/common.sh@236 -- # printf %02x 8 00:18:41.037 04:55:55 -- scripts/common.sh@236 -- # subclass=08 00:18:41.037 04:55:55 -- scripts/common.sh@237 -- # printf %02x 2 00:18:41.037 04:55:55 -- scripts/common.sh@237 -- # progif=02 00:18:41.037 04:55:55 -- scripts/common.sh@239 -- # hash lspci 00:18:41.037 04:55:55 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:18:41.037 04:55:55 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:18:41.037 04:55:55 -- scripts/common.sh@242 -- # grep -i -- -p02 00:18:41.037 04:55:55 -- scripts/common.sh@244 -- # tr -d '"' 00:18:41.037 04:55:55 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:41.037 04:55:55 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:41.037 04:55:55 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:18:41.037 04:55:55 -- scripts/common.sh@15 -- # local i 00:18:41.037 04:55:55 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:18:41.037 04:55:55 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:41.037 04:55:55 -- scripts/common.sh@24 -- # return 0 00:18:41.037 04:55:55 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:18:41.037 04:55:55 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:41.037 04:55:55 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:18:41.037 04:55:55 -- scripts/common.sh@322 -- # uname -s 00:18:41.037 04:55:55 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:41.037 04:55:55 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:41.037 04:55:55 -- scripts/common.sh@327 -- # (( 1 )) 00:18:41.037 04:55:55 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:18:41.037 04:55:55 -- dd/dd.sh@13 -- # check_liburing 00:18:41.037 04:55:55 -- dd/common.sh@139 -- # local lib so 00:18:41.037 04:55:55 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:18:41.037 04:55:55 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- 
dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libdaos.so.2 == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.037 04:55:55 -- dd/common.sh@143 -- # [[ libdaos_common.so == liburing.so.* ]] 00:18:41.037 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libdfs.so == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libgurt.so.4 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libz.so.1 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libisal.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libcart.so.4 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ liblz4.so.1 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libprotobuf-c.so.1 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libyaml-0.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libmercury_hl.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libmercury.so.2 == liburing.so.* ]] 00:18:41.038 
04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libmercury_util.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libna.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libfabric.so.1 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/common.sh@143 -- # [[ libpsm2.so.2 == liburing.so.* ]] 00:18:41.038 04:55:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:41.038 04:55:55 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:18:41.038 04:55:55 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:18:41.038 04:55:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:41.038 04:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:41.038 04:55:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.038 ************************************ 00:18:41.038 START TEST spdk_dd_basic_rw 00:18:41.038 ************************************ 00:18:41.038 04:55:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:18:41.038 * Looking for test storage... 00:18:41.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:41.038 04:55:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.038 04:55:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.038 04:55:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.038 04:55:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.038 04:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:41.038 04:55:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:41.038 04:55:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:41.038 04:55:55 -- paths/export.sh@5 -- # export PATH 00:18:41.038 04:55:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:18:41.038 04:55:55 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:18:41.038 04:55:55 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:18:41.038 04:55:55 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:18:41.038 04:55:55 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:18:41.038 04:55:55 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:18:41.038 04:55:55 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:18:41.038 04:55:55 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:41.038 04:55:55 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:41.038 04:55:55 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:41.296 04:55:55 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:18:41.296 04:55:55 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:18:41.296 04:55:55 -- dd/common.sh@126 -- # mapfile -t id 00:18:41.296 04:55:55 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:18:41.557 04:55:55 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not 
Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): 
Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 90 Data Units Written: 204 Host Read Commands: 1715 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:18:41.557 04:55:55 -- dd/common.sh@130 -- # lbaf=04 00:18:41.557 04:55:55 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 
2048 [... spdk_nvme_identify output identical to the first match above, elided ...] Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA
Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:18:41.557 04:55:55 -- dd/common.sh@132 -- # lbaf=4096 00:18:41.557 04:55:55 -- dd/common.sh@134 -- # echo 4096 00:18:41.557 04:55:55 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:18:41.557 04:55:55 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:41.557 04:55:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:18:41.557 04:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:41.557 04:55:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.557 04:55:55 -- dd/basic_rw.sh@96 -- # : 00:18:41.557 04:55:55 -- dd/basic_rw.sh@96 -- # gen_conf 00:18:41.557 04:55:55 -- dd/common.sh@31 -- # xtrace_disable 00:18:41.557 04:55:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.557 ************************************ 00:18:41.557 START TEST dd_bs_lt_native_bs 00:18:41.557 ************************************ 00:18:41.557 04:55:55 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:41.557 04:55:55 -- common/autotest_common.sh@640 -- # local es=0 00:18:41.557 04:55:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:41.557 04:55:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.557 04:55:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:41.557 04:55:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.557 04:55:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:41.557 04:55:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.557 04:55:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:41.557 04:55:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.557 04:55:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:41.557 04:55:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:41.557 { 00:18:41.557 "subsystems": [ 00:18:41.557 { 00:18:41.557 "subsystem": "bdev", 00:18:41.557 "config": [ 00:18:41.557 { 00:18:41.557 "params": { 00:18:41.557 "trtype": "pcie", 00:18:41.557 "name": "Nvme0", 00:18:41.557 "traddr": "0000:00:06.0" 00:18:41.557 }, 00:18:41.557 "method": "bdev_nvme_attach_controller" 00:18:41.557 }, 00:18:41.557 { 00:18:41.557 "method": "bdev_wait_for_examine" 00:18:41.557 } 00:18:41.557 ] 00:18:41.557 } 00:18:41.558 ] 00:18:41.558 } 00:18:41.558 [2024-05-15 04:55:55.756207] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
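Note: the two long [[ ... =~ ... ]] evaluations above are get_native_nvme_bs (dd/common.sh) parsing spdk_nvme_identify output in pure bash: the first regex captures the current LBA format index (#04), the second captures that format's data size, yielding native_bs=4096. A rough standalone equivalent, assuming identify output in the format shown in this trace:

# Recover the native block size of the namespace behind a PCIe NVMe controller
id_out=$(./build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')

re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id_out =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # "04" here

re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id_out =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # 4096 here

echo "$native_bs"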
00:18:41.558 [2024-05-15 04:55:55.756344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57852 ] 00:18:41.815 [2024-05-15 04:55:55.910762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.072 [2024-05-15 04:55:56.134430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.636 [2024-05-15 04:55:56.648705] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:18:42.636 [2024-05-15 04:55:56.648832] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:43.570 [2024-05-15 04:55:57.555541] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:43.828 04:55:57 -- common/autotest_common.sh@643 -- # es=234 00:18:43.828 04:55:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:43.828 04:55:57 -- common/autotest_common.sh@652 -- # es=106 00:18:43.828 04:55:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:43.828 04:55:57 -- common/autotest_common.sh@660 -- # es=1 00:18:43.828 04:55:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:43.828 ************************************ 00:18:43.828 END TEST dd_bs_lt_native_bs 00:18:43.828 ************************************ 00:18:43.828 00:18:43.828 real 0m2.385s 00:18:43.828 user 0m1.904s 00:18:43.828 sys 0m0.344s 00:18:43.828 04:55:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.828 04:55:57 -- common/autotest_common.sh@10 -- # set +x 00:18:43.828 04:55:58 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:18:43.828 04:55:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:43.828 04:55:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:43.828 04:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:43.828 ************************************ 00:18:43.828 START TEST dd_rw 00:18:43.828 ************************************ 00:18:43.828 04:55:58 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:18:43.828 04:55:58 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:18:43.828 04:55:58 -- dd/basic_rw.sh@12 -- # local count size 00:18:43.828 04:55:58 -- dd/basic_rw.sh@13 -- # local qds bss 00:18:43.828 04:55:58 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:18:43.828 04:55:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:43.828 04:55:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:43.828 04:55:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:43.828 04:55:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:43.828 04:55:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:43.828 04:55:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:43.828 04:55:58 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:43.828 04:55:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:43.828 04:55:58 -- dd/basic_rw.sh@23 -- # count=15 00:18:43.828 04:55:58 -- dd/basic_rw.sh@24 -- # count=15 00:18:43.828 04:55:58 -- dd/basic_rw.sh@25 -- # size=61440 00:18:43.828 04:55:58 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:43.828 04:55:58 -- dd/common.sh@98 -- # xtrace_disable 00:18:43.828 04:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:44.762 04:55:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
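Note: dd_bs_lt_native_bs above is a negative test: spdk_dd is expected to reject --bs=2048 because it is smaller than the 4096-byte native block size, and the NOT wrapper from autotest_common.sh turns that expected failure into a passing test (the es=234, es=106, es=1 sequence above is its exit-status normalization for commands that fail with codes above 128). A simplified sketch of the wrapper, omitting that normalization:

# NOT succeeds only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded: test failure
    fi
    return 0        # command failed as required
}

NOT ./build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61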
00:18:44.762 04:55:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:44.762 04:55:58 -- dd/common.sh@31 -- # xtrace_disable 00:18:44.762 04:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:44.762 { 00:18:44.762 "subsystems": [ 00:18:44.762 { 00:18:44.762 "subsystem": "bdev", 00:18:44.762 "config": [ 00:18:44.762 { 00:18:44.762 "params": { 00:18:44.762 "trtype": "pcie", 00:18:44.762 "name": "Nvme0", 00:18:44.762 "traddr": "0000:00:06.0" 00:18:44.762 }, 00:18:44.762 "method": "bdev_nvme_attach_controller" 00:18:44.762 }, 00:18:44.762 { 00:18:44.762 "method": "bdev_wait_for_examine" 00:18:44.762 } 00:18:44.762 ] 00:18:44.762 } 00:18:44.762 ] 00:18:44.762 } 00:18:44.762 [2024-05-15 04:55:58.806006] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:44.762 [2024-05-15 04:55:58.806159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57910 ] 00:18:44.762 [2024-05-15 04:55:58.958050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.021 [2024-05-15 04:55:59.188351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.962  Copying: 60/60 [kB] (average 19 MBps) 00:18:46.962 00:18:46.962 04:56:01 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:18:46.962 04:56:01 -- dd/basic_rw.sh@37 -- # gen_conf 00:18:46.962 04:56:01 -- dd/common.sh@31 -- # xtrace_disable 00:18:46.962 04:56:01 -- common/autotest_common.sh@10 -- # set +x 00:18:46.962 { 00:18:46.962 "subsystems": [ 00:18:46.962 { 00:18:46.962 "subsystem": "bdev", 00:18:46.962 "config": [ 00:18:46.962 { 00:18:46.962 "params": { 00:18:46.962 "trtype": "pcie", 00:18:46.962 "name": "Nvme0", 00:18:46.962 "traddr": "0000:00:06.0" 00:18:46.962 }, 00:18:46.962 "method": "bdev_nvme_attach_controller" 00:18:46.962 }, 00:18:46.962 { 00:18:46.962 "method": "bdev_wait_for_examine" 00:18:46.962 } 00:18:46.962 ] 00:18:46.962 } 00:18:46.962 ] 00:18:46.962 } 00:18:46.962 [2024-05-15 04:56:01.173849] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
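Note: every spdk_dd call in this test hands its bdev configuration to the process as JSON on an inherited file descriptor (--json /dev/fd/62), which is what gen_conf plus bash process substitution produce; no config file touches disk. An equivalent standalone shape, with the same two-step config seen in the braces above (attach the PCIe controller as Nvme0, then wait for bdev examine):

./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "name": "Nvme0", "traddr": "0000:00:06.0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)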
00:18:46.962 [2024-05-15 04:56:01.173998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57954 ] 00:18:47.221 [2024-05-15 04:56:01.327033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.478 [2024-05-15 04:56:01.561903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.416  Copying: 60/60 [kB] (average 19 MBps) 00:18:49.416 00:18:49.416 04:56:03 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:49.416 04:56:03 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:18:49.416 04:56:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:49.416 04:56:03 -- dd/common.sh@11 -- # local nvme_ref= 00:18:49.416 04:56:03 -- dd/common.sh@12 -- # local size=61440 00:18:49.416 04:56:03 -- dd/common.sh@14 -- # local bs=1048576 00:18:49.416 04:56:03 -- dd/common.sh@15 -- # local count=1 00:18:49.416 04:56:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:49.416 04:56:03 -- dd/common.sh@18 -- # gen_conf 00:18:49.416 04:56:03 -- dd/common.sh@31 -- # xtrace_disable 00:18:49.416 04:56:03 -- common/autotest_common.sh@10 -- # set +x 00:18:49.416 { 00:18:49.416 "subsystems": [ 00:18:49.416 { 00:18:49.416 "subsystem": "bdev", 00:18:49.416 "config": [ 00:18:49.416 { 00:18:49.416 "params": { 00:18:49.416 "trtype": "pcie", 00:18:49.416 "name": "Nvme0", 00:18:49.416 "traddr": "0000:00:06.0" 00:18:49.416 }, 00:18:49.416 "method": "bdev_nvme_attach_controller" 00:18:49.416 }, 00:18:49.416 { 00:18:49.416 "method": "bdev_wait_for_examine" 00:18:49.416 } 00:18:49.416 ] 00:18:49.416 } 00:18:49.416 ] 00:18:49.416 } 00:18:49.416 [2024-05-15 04:56:03.560796] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
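Note: each bs/qd combination ends the same way as the pass above: diff -q confirms the file written through the bdev and the file read back are identical, then clear_nvme zeroes the head of the namespace through the same bdev so the next combination starts from clean media:

diff -q test/dd/dd.dump0 test/dd/dd.dump1     # silent when the round trip is intact
# clear_nvme: overwrite the test region with zeroes (1 MiB here); $conf stands for
# the same bdev JSON shown above
./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"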
00:18:49.416 [2024-05-15 04:56:03.560952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57989 ] 00:18:49.674 [2024-05-15 04:56:03.718100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.933 [2024-05-15 04:56:03.962274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.876  Copying: 1024/1024 [kB] (average 333 MBps) 00:18:51.876 00:18:51.876 04:56:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:51.876 04:56:05 -- dd/basic_rw.sh@23 -- # count=15 00:18:51.876 04:56:05 -- dd/basic_rw.sh@24 -- # count=15 00:18:51.876 04:56:05 -- dd/basic_rw.sh@25 -- # size=61440 00:18:51.876 04:56:05 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:51.876 04:56:05 -- dd/common.sh@98 -- # xtrace_disable 00:18:51.876 04:56:05 -- common/autotest_common.sh@10 -- # set +x 00:18:52.135 04:56:06 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:18:52.135 04:56:06 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:52.135 04:56:06 -- dd/common.sh@31 -- # xtrace_disable 00:18:52.135 04:56:06 -- common/autotest_common.sh@10 -- # set +x 00:18:52.392 { 00:18:52.392 "subsystems": [ 00:18:52.392 { 00:18:52.392 "subsystem": "bdev", 00:18:52.392 "config": [ 00:18:52.392 { 00:18:52.392 "params": { 00:18:52.392 "trtype": "pcie", 00:18:52.392 "name": "Nvme0", 00:18:52.392 "traddr": "0000:00:06.0" 00:18:52.392 }, 00:18:52.392 "method": "bdev_nvme_attach_controller" 00:18:52.392 }, 00:18:52.392 { 00:18:52.392 "method": "bdev_wait_for_examine" 00:18:52.392 } 00:18:52.392 ] 00:18:52.392 } 00:18:52.392 ] 00:18:52.392 } 00:18:52.392 [2024-05-15 04:56:06.463887] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:52.392 [2024-05-15 04:56:06.464034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58033 ] 00:18:52.650 [2024-05-15 04:56:06.628373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.650 [2024-05-15 04:56:06.857130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.593  Copying: 60/60 [kB] (average 58 MBps) 00:18:54.593 00:18:54.593 04:56:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:18:54.593 04:56:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:18:54.593 04:56:08 -- dd/common.sh@31 -- # xtrace_disable 00:18:54.593 04:56:08 -- common/autotest_common.sh@10 -- # set +x 00:18:54.593 { 00:18:54.593 "subsystems": [ 00:18:54.593 { 00:18:54.593 "subsystem": "bdev", 00:18:54.593 "config": [ 00:18:54.593 { 00:18:54.593 "params": { 00:18:54.593 "trtype": "pcie", 00:18:54.593 "name": "Nvme0", 00:18:54.593 "traddr": "0000:00:06.0" 00:18:54.593 }, 00:18:54.593 "method": "bdev_nvme_attach_controller" 00:18:54.593 }, 00:18:54.593 { 00:18:54.593 "method": "bdev_wait_for_examine" 00:18:54.593 } 00:18:54.593 ] 00:18:54.593 } 00:18:54.593 ] 00:18:54.593 } 00:18:54.851 [2024-05-15 04:56:08.845541] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
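Note: the effect of queue depth is visible in the throughput the tool reports: at bs=4096 the qd=1 passes above average roughly 19 MBps, while this qd=64 pass averages 58 MBps, since the higher queue depth keeps far more I/Os in flight against the same device.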
00:18:54.851 [2024-05-15 04:56:08.845696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ] 00:18:54.851 [2024-05-15 04:56:08.997203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.110 [2024-05-15 04:56:09.234857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.086  Copying: 60/60 [kB] (average 58 MBps) 00:18:57.086 00:18:57.086 04:56:11 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:57.086 04:56:11 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:18:57.086 04:56:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:57.086 04:56:11 -- dd/common.sh@11 -- # local nvme_ref= 00:18:57.086 04:56:11 -- dd/common.sh@12 -- # local size=61440 00:18:57.086 04:56:11 -- dd/common.sh@14 -- # local bs=1048576 00:18:57.086 04:56:11 -- dd/common.sh@15 -- # local count=1 00:18:57.086 04:56:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:57.086 04:56:11 -- dd/common.sh@18 -- # gen_conf 00:18:57.086 04:56:11 -- dd/common.sh@31 -- # xtrace_disable 00:18:57.086 04:56:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.086 { 00:18:57.086 "subsystems": [ 00:18:57.086 { 00:18:57.086 "subsystem": "bdev", 00:18:57.086 "config": [ 00:18:57.086 { 00:18:57.086 "params": { 00:18:57.086 "trtype": "pcie", 00:18:57.086 "name": "Nvme0", 00:18:57.086 "traddr": "0000:00:06.0" 00:18:57.086 }, 00:18:57.086 "method": "bdev_nvme_attach_controller" 00:18:57.086 }, 00:18:57.086 { 00:18:57.086 "method": "bdev_wait_for_examine" 00:18:57.086 } 00:18:57.086 ] 00:18:57.086 } 00:18:57.086 ] 00:18:57.086 } 00:18:57.086 [2024-05-15 04:56:11.213817] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:57.086 [2024-05-15 04:56:11.213969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58104 ] 00:18:57.345 [2024-05-15 04:56:11.374283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.603 [2024-05-15 04:56:11.602262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.556  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:59.556 00:18:59.556 04:56:13 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:59.556 04:56:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:59.556 04:56:13 -- dd/basic_rw.sh@23 -- # count=7 00:18:59.556 04:56:13 -- dd/basic_rw.sh@24 -- # count=7 00:18:59.556 04:56:13 -- dd/basic_rw.sh@25 -- # size=57344 00:18:59.556 04:56:13 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:18:59.556 04:56:13 -- dd/common.sh@98 -- # xtrace_disable 00:18:59.556 04:56:13 -- common/autotest_common.sh@10 -- # set +x 00:18:59.813 04:56:13 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:18:59.813 04:56:13 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:59.813 04:56:13 -- dd/common.sh@31 -- # xtrace_disable 00:18:59.813 04:56:13 -- common/autotest_common.sh@10 -- # set +x 00:18:59.813 { 00:18:59.813 "subsystems": [ 00:18:59.813 { 00:18:59.813 "subsystem": "bdev", 00:18:59.813 "config": [ 00:18:59.813 { 00:18:59.813 "params": { 00:18:59.813 "trtype": "pcie", 00:18:59.813 "name": "Nvme0", 00:18:59.813 "traddr": "0000:00:06.0" 00:18:59.813 }, 00:18:59.813 "method": "bdev_nvme_attach_controller" 00:18:59.813 }, 00:18:59.813 { 00:18:59.813 "method": "bdev_wait_for_examine" 00:18:59.813 } 00:18:59.813 ] 00:18:59.813 } 00:18:59.813 ] 00:18:59.813 } 00:19:00.070 [2024-05-15 04:56:14.076904] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
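Note: dd_rw sweeps bs over native_bs shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes) and qd over 1 and 64, and picks count per block size so each data set stays in the same size range:

#   bs       count   bytes moved
#   4096     15      15 * 4096  = 61440
#   8192      7       7 * 8192  = 57344
#   16384     3       3 * 16384 = 49152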
00:19:00.070 [2024-05-15 04:56:14.077064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58139 ] 00:19:00.070 [2024-05-15 04:56:14.230640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.328 [2024-05-15 04:56:14.452124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.269  Copying: 56/56 [kB] (average 27 MBps) 00:19:02.269 00:19:02.269 04:56:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:19:02.270 04:56:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:02.270 04:56:16 -- dd/common.sh@31 -- # xtrace_disable 00:19:02.270 04:56:16 -- common/autotest_common.sh@10 -- # set +x 00:19:02.270 { 00:19:02.270 "subsystems": [ 00:19:02.270 { 00:19:02.270 "subsystem": "bdev", 00:19:02.270 "config": [ 00:19:02.270 { 00:19:02.270 "params": { 00:19:02.270 "trtype": "pcie", 00:19:02.270 "name": "Nvme0", 00:19:02.270 "traddr": "0000:00:06.0" 00:19:02.270 }, 00:19:02.270 "method": "bdev_nvme_attach_controller" 00:19:02.270 }, 00:19:02.270 { 00:19:02.270 "method": "bdev_wait_for_examine" 00:19:02.270 } 00:19:02.270 ] 00:19:02.270 } 00:19:02.270 ] 00:19:02.270 } 00:19:02.270 [2024-05-15 04:56:16.432417] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:02.270 [2024-05-15 04:56:16.432560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:19:02.528 [2024-05-15 04:56:16.619082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.787 [2024-05-15 04:56:16.845511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.729  Copying: 56/56 [kB] (average 54 MBps) 00:19:04.729 00:19:04.729 04:56:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:04.729 04:56:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:19:04.729 04:56:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:04.729 04:56:18 -- dd/common.sh@11 -- # local nvme_ref= 00:19:04.729 04:56:18 -- dd/common.sh@12 -- # local size=57344 00:19:04.729 04:56:18 -- dd/common.sh@14 -- # local bs=1048576 00:19:04.729 04:56:18 -- dd/common.sh@15 -- # local count=1 00:19:04.729 04:56:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:04.729 04:56:18 -- dd/common.sh@18 -- # gen_conf 00:19:04.729 04:56:18 -- dd/common.sh@31 -- # xtrace_disable 00:19:04.729 04:56:18 -- common/autotest_common.sh@10 -- # set +x 00:19:04.729 { 00:19:04.729 "subsystems": [ 00:19:04.729 { 00:19:04.729 "subsystem": "bdev", 00:19:04.729 "config": [ 00:19:04.729 { 00:19:04.729 "params": { 00:19:04.729 "trtype": "pcie", 00:19:04.729 "name": "Nvme0", 00:19:04.729 "traddr": "0000:00:06.0" 00:19:04.729 }, 00:19:04.729 "method": "bdev_nvme_attach_controller" 00:19:04.729 }, 00:19:04.729 { 00:19:04.729 "method": "bdev_wait_for_examine" 00:19:04.729 } 00:19:04.729 ] 00:19:04.729 } 00:19:04.729 ] 00:19:04.729 } 00:19:04.729 [2024-05-15 04:56:18.816890] Starting SPDK v24.01.1-pre git sha1 
36faa8c31 / DPDK 23.11.0 initialization... 00:19:04.729 [2024-05-15 04:56:18.817052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:19:04.988 [2024-05-15 04:56:18.965475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.988 [2024-05-15 04:56:19.191655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.930  Copying: 1024/1024 [kB] (average 500 MBps) 00:19:06.930 00:19:06.930 04:56:21 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:06.930 04:56:21 -- dd/basic_rw.sh@23 -- # count=7 00:19:06.930 04:56:21 -- dd/basic_rw.sh@24 -- # count=7 00:19:06.930 04:56:21 -- dd/basic_rw.sh@25 -- # size=57344 00:19:06.930 04:56:21 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:19:06.930 04:56:21 -- dd/common.sh@98 -- # xtrace_disable 00:19:06.930 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:19:07.498 04:56:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:19:07.498 04:56:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:07.498 04:56:21 -- dd/common.sh@31 -- # xtrace_disable 00:19:07.498 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:19:07.498 { 00:19:07.498 "subsystems": [ 00:19:07.498 { 00:19:07.498 "subsystem": "bdev", 00:19:07.498 "config": [ 00:19:07.498 { 00:19:07.498 "params": { 00:19:07.498 "trtype": "pcie", 00:19:07.498 "name": "Nvme0", 00:19:07.498 "traddr": "0000:00:06.0" 00:19:07.498 }, 00:19:07.498 "method": "bdev_nvme_attach_controller" 00:19:07.498 }, 00:19:07.498 { 00:19:07.498 "method": "bdev_wait_for_examine" 00:19:07.498 } 00:19:07.498 ] 00:19:07.498 } 00:19:07.498 ] 00:19:07.498 } 00:19:07.498 [2024-05-15 04:56:21.613897] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:07.498 [2024-05-15 04:56:21.614054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58252 ] 00:19:07.756 [2024-05-15 04:56:21.776284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.015 [2024-05-15 04:56:22.019127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.988  Copying: 56/56 [kB] (average 54 MBps) 00:19:09.988 00:19:09.988 04:56:23 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:19:09.988 04:56:23 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:09.988 04:56:23 -- dd/common.sh@31 -- # xtrace_disable 00:19:09.988 04:56:23 -- common/autotest_common.sh@10 -- # set +x 00:19:09.988 { 00:19:09.988 "subsystems": [ 00:19:09.988 { 00:19:09.988 "subsystem": "bdev", 00:19:09.988 "config": [ 00:19:09.988 { 00:19:09.988 "params": { 00:19:09.988 "trtype": "pcie", 00:19:09.988 "name": "Nvme0", 00:19:09.988 "traddr": "0000:00:06.0" 00:19:09.988 }, 00:19:09.988 "method": "bdev_nvme_attach_controller" 00:19:09.988 }, 00:19:09.988 { 00:19:09.988 "method": "bdev_wait_for_examine" 00:19:09.988 } 00:19:09.988 ] 00:19:09.988 } 00:19:09.988 ] 00:19:09.988 } 00:19:09.988 [2024-05-15 04:56:24.000366] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:09.988 [2024-05-15 04:56:24.000511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:19:09.988 [2024-05-15 04:56:24.169023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.254 [2024-05-15 04:56:24.418915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.193  Copying: 56/56 [kB] (average 54 MBps) 00:19:12.193 00:19:12.193 04:56:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:12.193 04:56:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:19:12.193 04:56:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:12.193 04:56:26 -- dd/common.sh@11 -- # local nvme_ref= 00:19:12.193 04:56:26 -- dd/common.sh@12 -- # local size=57344 00:19:12.193 04:56:26 -- dd/common.sh@14 -- # local bs=1048576 00:19:12.193 04:56:26 -- dd/common.sh@15 -- # local count=1 00:19:12.193 04:56:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:12.193 04:56:26 -- dd/common.sh@18 -- # gen_conf 00:19:12.193 04:56:26 -- dd/common.sh@31 -- # xtrace_disable 00:19:12.193 04:56:26 -- common/autotest_common.sh@10 -- # set +x 00:19:12.193 { 00:19:12.193 "subsystems": [ 00:19:12.193 { 00:19:12.193 "subsystem": "bdev", 00:19:12.193 "config": [ 00:19:12.193 { 00:19:12.193 "params": { 00:19:12.193 "trtype": "pcie", 00:19:12.193 "name": "Nvme0", 00:19:12.193 "traddr": "0000:00:06.0" 00:19:12.193 }, 00:19:12.193 "method": "bdev_nvme_attach_controller" 00:19:12.193 }, 00:19:12.193 { 00:19:12.193 "method": "bdev_wait_for_examine" 00:19:12.193 } 00:19:12.193 ] 00:19:12.193 } 00:19:12.193 ] 00:19:12.193 } 00:19:12.193 [2024-05-15 04:56:26.401431] Starting SPDK v24.01.1-pre git sha1 
36faa8c31 / DPDK 23.11.0 initialization... 00:19:12.193 [2024-05-15 04:56:26.401582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ] 00:19:12.452 [2024-05-15 04:56:26.555624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.711 [2024-05-15 04:56:26.795131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.654  Copying: 1024/1024 [kB] (average 500 MBps) 00:19:14.654 00:19:14.654 04:56:28 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:19:14.654 04:56:28 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:14.654 04:56:28 -- dd/basic_rw.sh@23 -- # count=3 00:19:14.654 04:56:28 -- dd/basic_rw.sh@24 -- # count=3 00:19:14.654 04:56:28 -- dd/basic_rw.sh@25 -- # size=49152 00:19:14.654 04:56:28 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:19:14.654 04:56:28 -- dd/common.sh@98 -- # xtrace_disable 00:19:14.654 04:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.912 04:56:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:19:14.912 04:56:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:14.912 04:56:29 -- dd/common.sh@31 -- # xtrace_disable 00:19:14.912 04:56:29 -- common/autotest_common.sh@10 -- # set +x 00:19:14.912 { 00:19:14.912 "subsystems": [ 00:19:14.912 { 00:19:14.912 "subsystem": "bdev", 00:19:14.912 "config": [ 00:19:14.912 { 00:19:14.912 "params": { 00:19:14.912 "trtype": "pcie", 00:19:14.912 "name": "Nvme0", 00:19:14.912 "traddr": "0000:00:06.0" 00:19:14.912 }, 00:19:14.912 "method": "bdev_nvme_attach_controller" 00:19:14.912 }, 00:19:14.912 { 00:19:14.912 "method": "bdev_wait_for_examine" 00:19:14.912 } 00:19:14.912 ] 00:19:14.912 } 00:19:14.912 ] 00:19:14.912 } 00:19:15.171 [2024-05-15 04:56:29.193770] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:15.171 [2024-05-15 04:56:29.193918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58358 ]
00:19:15.171 [2024-05-15 04:56:29.345633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:15.429 [2024-05-15 04:56:29.569769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:17.389  Copying: 48/48 [kB] (average 46 MBps)
00:19:17.389
00:19:17.389 04:56:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62
00:19:17.389 04:56:31 -- dd/basic_rw.sh@37 -- # gen_conf
00:19:17.389 04:56:31 -- dd/common.sh@31 -- # xtrace_disable
00:19:17.389 04:56:31 -- common/autotest_common.sh@10 -- # set +x
00:19:17.389 {
00:19:17.389 "subsystems": [
00:19:17.389 {
00:19:17.389 "subsystem": "bdev",
00:19:17.389 "config": [
00:19:17.389 {
00:19:17.389 "params": {
00:19:17.389 "trtype": "pcie",
00:19:17.389 "name": "Nvme0",
00:19:17.389 "traddr": "0000:00:06.0"
00:19:17.389 },
00:19:17.389 "method": "bdev_nvme_attach_controller"
00:19:17.389 },
00:19:17.389 {
00:19:17.389 "method": "bdev_wait_for_examine"
00:19:17.389 }
00:19:17.389 ]
00:19:17.389 }
00:19:17.389 ]
00:19:17.389 }
00:19:17.389 [2024-05-15 04:56:31.521207] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:17.389 [2024-05-15 04:56:31.521367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58390 ]
00:19:17.648 [2024-05-15 04:56:31.689983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:17.907 [2024-05-15 04:56:31.910050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:19.860  Copying: 48/48 [kB] (average 46 MBps)
00:19:19.860
00:19:19.860 04:56:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:19.860 04:56:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:19:19.860 04:56:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:19:19.860 04:56:33 -- dd/common.sh@11 -- # local nvme_ref=
00:19:19.860 04:56:33 -- dd/common.sh@12 -- # local size=49152
00:19:19.860 04:56:33 -- dd/common.sh@14 -- # local bs=1048576
00:19:19.860 04:56:33 -- dd/common.sh@15 -- # local count=1
00:19:19.860 04:56:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:19:19.860 04:56:33 -- dd/common.sh@18 -- # gen_conf
00:19:19.860 04:56:33 -- dd/common.sh@31 -- # xtrace_disable
00:19:19.860 04:56:33 -- common/autotest_common.sh@10 -- # set +x
00:19:19.860 {
00:19:19.860 "subsystems": [
00:19:19.860 {
00:19:19.860 "subsystem": "bdev",
00:19:19.860 "config": [
00:19:19.860 {
00:19:19.860 "params": {
00:19:19.860 "trtype": "pcie",
00:19:19.860 "name": "Nvme0",
00:19:19.860 "traddr": "0000:00:06.0"
00:19:19.860 },
00:19:19.860 "method": "bdev_nvme_attach_controller"
00:19:19.860 },
00:19:19.860 {
00:19:19.860 "method": "bdev_wait_for_examine"
00:19:19.860 }
00:19:19.860 ]
00:19:19.860 }
00:19:19.860 ]
00:19:19.860 }
00:19:19.860 [2024-05-15 04:56:33.909999] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:19.860 [2024-05-15 04:56:33.910225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58433 ]
00:19:20.118 [2024-05-15 04:56:34.077368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:20.118 [2024-05-15 04:56:34.301032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:22.063  Copying: 1024/1024 [kB] (average 1000 MBps)
00:19:22.063
00:19:22.063 04:56:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:19:22.063 04:56:36 -- dd/basic_rw.sh@23 -- # count=3
00:19:22.063 04:56:36 -- dd/basic_rw.sh@24 -- # count=3
00:19:22.063 04:56:36 -- dd/basic_rw.sh@25 -- # size=49152
00:19:22.063 04:56:36 -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:19:22.063 04:56:36 -- dd/common.sh@98 -- # xtrace_disable
00:19:22.063 04:56:36 -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 04:56:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
00:19:22.630 04:56:36 -- dd/basic_rw.sh@30 -- # gen_conf
00:19:22.630 04:56:36 -- dd/common.sh@31 -- # xtrace_disable
00:19:22.630 04:56:36 -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 {
00:19:22.630 "subsystems": [
00:19:22.630 {
00:19:22.630 "subsystem": "bdev",
00:19:22.630 "config": [
00:19:22.630 {
00:19:22.630 "params": {
00:19:22.630 "trtype": "pcie",
00:19:22.630 "name": "Nvme0",
00:19:22.630 "traddr": "0000:00:06.0"
00:19:22.630 },
00:19:22.630 "method": "bdev_nvme_attach_controller"
00:19:22.630 },
00:19:22.630 {
00:19:22.630 "method": "bdev_wait_for_examine"
00:19:22.630 }
00:19:22.630 ]
00:19:22.630 }
00:19:22.630 ]
00:19:22.630 }
00:19:22.630 [2024-05-15 04:56:36.743346] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:22.630 [2024-05-15 04:56:36.743492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58476 ]
00:19:22.889 [2024-05-15 04:56:36.892960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:22.889 [2024-05-15 04:56:37.108299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:24.834  Copying: 48/48 [kB] (average 46 MBps)
00:19:24.834
00:19:24.834 04:56:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
00:19:24.834 04:56:38 -- dd/basic_rw.sh@37 -- # gen_conf
00:19:24.834 04:56:38 -- dd/common.sh@31 -- # xtrace_disable
00:19:24.834 04:56:38 -- common/autotest_common.sh@10 -- # set +x
00:19:24.834 {
00:19:24.834 "subsystems": [
00:19:24.834 {
00:19:24.834 "subsystem": "bdev",
00:19:24.834 "config": [
00:19:24.834 {
00:19:24.834 "params": {
00:19:24.834 "trtype": "pcie",
00:19:24.834 "name": "Nvme0",
00:19:24.834 "traddr": "0000:00:06.0"
00:19:24.834 },
00:19:24.834 "method": "bdev_nvme_attach_controller"
00:19:24.834 },
00:19:24.834 {
00:19:24.834 "method": "bdev_wait_for_examine"
00:19:24.834 }
00:19:24.834 ]
00:19:24.834 }
00:19:24.834 ]
00:19:24.834 }
00:19:25.093 [2024-05-15 04:56:39.081206] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:25.093 [2024-05-15 04:56:39.081351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58508 ]
00:19:25.093 [2024-05-15 04:56:39.232538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:25.352 [2024-05-15 04:56:39.450405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:27.298  Copying: 48/48 [kB] (average 46 MBps)
00:19:27.298
00:19:27.298 04:56:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:27.298 04:56:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:19:27.298 04:56:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:19:27.298 04:56:41 -- dd/common.sh@11 -- # local nvme_ref=
00:19:27.298 04:56:41 -- dd/common.sh@12 -- # local size=49152
00:19:27.298 04:56:41 -- dd/common.sh@14 -- # local bs=1048576
00:19:27.299 04:56:41 -- dd/common.sh@15 -- # local count=1
00:19:27.299 04:56:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:19:27.299 04:56:41 -- dd/common.sh@18 -- # gen_conf
00:19:27.299 04:56:41 -- dd/common.sh@31 -- # xtrace_disable
00:19:27.299 04:56:41 -- common/autotest_common.sh@10 -- # set +x
00:19:27.299 {
00:19:27.299 "subsystems": [
00:19:27.299 {
00:19:27.299 "subsystem": "bdev",
00:19:27.299 "config": [
00:19:27.299 {
00:19:27.299 "params": {
00:19:27.299 "trtype": "pcie",
00:19:27.299 "name": "Nvme0",
00:19:27.299 "traddr": "0000:00:06.0"
00:19:27.299 },
00:19:27.299 "method": "bdev_nvme_attach_controller"
00:19:27.299 },
00:19:27.299 {
00:19:27.299 "method": "bdev_wait_for_examine"
00:19:27.299 }
00:19:27.299 ]
00:19:27.299 }
00:19:27.299 ]
00:19:27.299 }
00:19:27.299 [2024-05-15 04:56:41.451605] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:27.299 [2024-05-15 04:56:41.452134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58541 ]
00:19:27.570 [2024-05-15 04:56:41.642039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:27.829 [2024-05-15 04:56:41.868340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:29.771  Copying: 1024/1024 [kB] (average 500 MBps)
00:19:29.771
00:19:29.771 ************************************
00:19:29.771 END TEST dd_rw
00:19:29.771 ************************************
00:19:29.771
00:19:29.771 real 0m45.652s
00:19:29.771 user 0m36.815s
00:19:29.771 sys 0m6.385s
00:19:29.771 04:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:29.771 04:56:43 -- common/autotest_common.sh@10 -- # set +x
00:19:29.771 04:56:43 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset
00:19:29.771 04:56:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:19:29.771 04:56:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:29.771 04:56:43 -- common/autotest_common.sh@10 -- # set +x
00:19:29.771 ************************************
00:19:29.771 START TEST dd_rw_offset
00:19:29.771 ************************************
00:19:29.771 04:56:43 -- common/autotest_common.sh@1104 -- # basic_offset
00:19:29.771 04:56:43 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check
00:19:29.771 04:56:43 -- dd/basic_rw.sh@54 -- # gen_bytes 4096
00:19:29.771 04:56:43 -- dd/common.sh@98 -- # xtrace_disable
00:19:29.771 04:56:43 -- common/autotest_common.sh@10 -- # set +x
00:19:29.771 04:56:43 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 ))
00:19:29.771 04:56:43 -- dd/basic_rw.sh@56 -- #
data=2qypmx9em2p8mlpsdktn47d90jvvaxpy4ccojsj87q4xx1pwoekjk78nxor79j5jyc3kepjwsu4mu5vsdxcqf42hlritiqnm0u4srv8yo1w7k248o0vneque0spfwrghksn4yvlridwzkog09axgy813c0faoqj19sh8i652wamuij91x5asxyokcds6ci1b09di4fvp0yr4qky848ekuyfdthhu3tbafgi7pepz37cj2r2dsj7bnnkgdpnr9ewkbrgvbyyxifincgzubhocnx6q3pr9krz1p5d7amjjl1yfemriwscafgazconyqmbh15sznx60jkj64o03zbarj6qkp6g18gp2yp1w8wcht38hiwxafonedqgf7ssc4b6y7rra0jte6bimpy7h10d8oi99ais4kaatbtpm893td7rvg4ox31x6t759fn2l7mnwb5iy2wf5q5qg404c6mcsatvt8dwk02oeupy6tc81mh7mecymhla2x0yy8qt1knf9ebqggsqywbsu3vkt46qumsaz7qvfsn85qay9zfx1krmxtif81v207qdpxrefqogs9uwvyiqh89tq6plfiq3nuf39o1w4frl0p0gkbjskq9xq1y49s63xf66w5gl0irh75iq6htdk6z2xd97zxaqnz09ioi8hg6uqx008rczgwhktfvjfpfaf178eddo70100r10b12m4ny9y7ba4asdp4deen8n7psnxddeq4xh4ayk35v8wlqs0lph4et2fsuz9g508nuj95pnwqzhnhjw608qcl0y54yqbhlfzaprl1eioix9culm0ohyqv2kmiz1kg85etdr6tqmgtwbeoc8zcomf6afyszadfbl7qffgwellxlkrtr6iib6jm7i2jqnu2ab9ia37nq11wps9lhepfeymj5mdwjfl34aq9k3q2d5j9wze5mvpdqc5kna556hnjeqogg1g8n2xhf07i50pj2qukwxwotyan31ey2qvl9xrc59h4c0rdzgtq29mcsyjcvnvqinejmv3kdoshz6q7fvhsm8q0es111nd5zlu0x0sov9rkumin0ok8ao5y4tffpvfqq2pimkiy0wtrl7on2822umtdt5g5z4pgaq115w3ge5cigoq2uwsxng0udd5becniiwirex1wuv65lrs9bpfson9w4loqj3n7wk8ym2g496vuf0fqfitooxonrmiht9ckcnbdd1tlkwbnig2yndxcey4yqj26igqqlyjxdi4yggy0q7yj1vp87xff6benucx3x2sfkq5ezqnf63wsyvxm9aoc279fg0ckoaj2peias67dnv26dnxz1c989b9j8s7a0hlyjcjofh9dnm0vyl8g13os725a0e53vm6z499ucfbtohjalvk91oiiq0q4b5d9xzrwz6y8h2sg4y327i8secfv24o03r9bg1kmjkt8el613rk230y30auj7iph29x4jzjfhv213kcn8wf3qoxsj337t2n8vzdv22x6tjo9zc2b9fh3yt1gxlktlzjpd8segssgil8x1d0yknj8nxwe6646b30g5pspx8bh4iijk8qs81fwegwh6390xjf7r5c41cuxxe223gsa7lq86op22emufmjldrl6ikfab79vfsfxh52xk8takyf5ywa3ga3oz5c1vrz4p3q7ddbeqfbyyafqfykb1vsoml2jyv2mab0080gg2xzncfb85rmewjjr6iyji66jxfsedk0pgxtwvtolwa06xkw57tonytp091gn8o5ftfct7es3vw7yb00ylpl16ge0ki0lngw3vp4oktsi4n0k80c2sbu7n8a3ynyqwy1qmkc83awusx3e9g4tixesp7nmygglm29waj9rege7m6ptpqphi7apnu9vvhmet990xe3xa9fc3g6mpwvwbs8e52oai1a9bn06sbnisv5l1sazsh1ybmsmukmlhonnz4287rl3hkvnlte34uisbhztuh1qd3hvbjowieq940580m6qa3nr710gh23gk9sq73rqrwes0iy4617hud12yyuztkcip5yb7msbtv80whmjgzqgeyd1bw952ow37eh6zmmuxtvy9nxnb42hyx1qn5odnh7wz1adxe0lbkeae3kseat2givq9vmi8ewn12kk9wiz8sqz3mevor9klh2idgi6499ixfl5f9vt0m035mri9uhyvnq3nl9umfcubgx5isapt9uy3son24xyqbp551zkh0fxx4y7tle5kgj609q1avq3czqnu3j4npd1szcu1zape3f5ypruz0o29q745xf2wa7jmidb8n6omauav1fdxtsegc32kue1qc0zrnvcephlzhbtv17y4mdtsjueqs73scm22niemf25mb01nh673a7sjv923esggty44xzr52harc6f6bjm1zms7xrjovnhb0kz7odrq1dvzzo6254dvsouxd6jl7xt9qgf4b9kcxqbv4nf13zpufj6qjkn7bfq295mu2untaf693sovs59xqqmyxxn6wwsdd2g3a8b8vfl8hp83a8konbfyv6ixtdregnj0jooh5nimhegscmye6spm9ri334h5t5ekca8wmwh0kwgscu2rki3lu2n897sts10zgsrj4xyb7gh4cmf84g08umemqnzcb3gm75mjo5qqxom07qygap7nz2npd70xsrzdlfrynaldw9au2ze0cs0vjp6yfacj39l5yes5edhpdpowrmljkdawou8k6a5ahbdlxld1rml834mbjhgf9xaq9f5935ggjrgvdqn77noen6fhzx3rpebwm02hi83r0yxqkueisuly3p6ivcn78xwpuu26ya9mtzckhvqk3iad31ohtge6tgbw5dwha80gfrsz6fsipv9jdlzj2sqiod2rg5lt7ykeplnfvdbwaqligctn5g8auamexhm8iwzvyvhsswukjwdbkapvy9327li4bfkolgil4oatb0orr9kgn2444vyrp0uvzitv3d9rtf5uh4doiit3o9m4ampcix96menuxdcq5ycufj854xj7kxva1cistzi06do6h4m1vyybc056dukltt7gb9j8l31rb28ijynaedsedb24abc2aduak7t8ht50rjnrck26algqel4gh9gsj8ch4zeyf6c3chm2ezhawygia4f3l5pjjnd3pyob8xj31e5y77pfbqpc6yne7qcl5bra82diwc6mhmg4e9zzowxrmm8ay6wfrzv9yxali0pfw0551o45vxno5od6kpkggco92fky9ac866g7uxa6nl75orbrfx52823zainu9y0mbr8ceafdvgb6b64aia1wbb29oqh9pk1pxyl9badvlt940ez0cvz7z231uqol0l7bbjhz7y951w9buudvgaprau4l7b2jjz0h448fhmq8h5bcxjxg9y66vbcatnebruzc9ta8kwarrpxx57jz87ruwn78ybgxi6nzc66pz50kzcfk7c56rmz6x6z6ouxg2gqyjxn3knubo27cd8scj0t9mah2v201
a3ux9ri8maruqp46oudlk3g4zmqywg7sgt61xfo6jdb9sazlr0ewo9cbnbxoepaevmqfoly9n8sxbnkq77sbz1ykcn72t1zfwwrtzimdy0etugfsr8ve1pyhhtoszhlmiqras2v1vmxvx0ijt5nyua3d2p84tayd44riils70lewtqb7rw732be3izmq5ehfjhy6oft95120c1yuoue4epkhhlx4zxfeg97c9cwuj8j29m17td78or3rx9bba631f7s12mta8dozeta0xcrnoamj62sm2xq6bgbf10hqozkelbdeqipkbwdrqyj9db262gr26tw4v6h35mhs4xyla1n0hpvumars2hlzc3lsyoeselkfikmoi8sa9n6b7ra5nnurp025tqrbzkaybahoyf4jz5ewlp8h4aejuo5a3vs9mhtd9zj418gmrj1potscynp4zhqy5m5vjrg2qalful898f51anpb2devpkf2nfykhwoqo6cabm946qsbpn4fl7ou6vz6f2lqvow5y8i6pr3zt8w0pyqqxx 00:19:29.771 04:56:43 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:19:29.771 04:56:43 -- dd/basic_rw.sh@59 -- # gen_conf 00:19:29.771 04:56:43 -- dd/common.sh@31 -- # xtrace_disable 00:19:29.771 04:56:43 -- common/autotest_common.sh@10 -- # set +x 00:19:29.771 { 00:19:29.771 "subsystems": [ 00:19:29.771 { 00:19:29.771 "subsystem": "bdev", 00:19:29.771 "config": [ 00:19:29.771 { 00:19:29.771 "params": { 00:19:29.771 "trtype": "pcie", 00:19:29.771 "name": "Nvme0", 00:19:29.771 "traddr": "0000:00:06.0" 00:19:29.771 }, 00:19:29.771 "method": "bdev_nvme_attach_controller" 00:19:29.771 }, 00:19:29.771 { 00:19:29.771 "method": "bdev_wait_for_examine" 00:19:29.771 } 00:19:29.771 ] 00:19:29.771 } 00:19:29.771 ] 00:19:29.771 } 00:19:29.771 [2024-05-15 04:56:43.966394] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:29.771 [2024-05-15 04:56:43.966542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58607 ] 00:19:30.029 [2024-05-15 04:56:44.142588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.288 [2024-05-15 04:56:44.364853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.230  Copying: 4096/4096 [B] (average 4000 kBps) 00:19:32.230 00:19:32.230 04:56:46 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:19:32.230 04:56:46 -- dd/basic_rw.sh@65 -- # gen_conf 00:19:32.230 04:56:46 -- dd/common.sh@31 -- # xtrace_disable 00:19:32.230 04:56:46 -- common/autotest_common.sh@10 -- # set +x 00:19:32.230 { 00:19:32.230 "subsystems": [ 00:19:32.230 { 00:19:32.230 "subsystem": "bdev", 00:19:32.230 "config": [ 00:19:32.230 { 00:19:32.230 "params": { 00:19:32.230 "trtype": "pcie", 00:19:32.230 "name": "Nvme0", 00:19:32.230 "traddr": "0000:00:06.0" 00:19:32.230 }, 00:19:32.230 "method": "bdev_nvme_attach_controller" 00:19:32.230 }, 00:19:32.230 { 00:19:32.230 "method": "bdev_wait_for_examine" 00:19:32.230 } 00:19:32.230 ] 00:19:32.230 } 00:19:32.230 ] 00:19:32.230 } 00:19:32.230 [2024-05-15 04:56:46.346631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:32.230 [2024-05-15 04:56:46.346923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58640 ] 00:19:32.489 [2024-05-15 04:56:46.497528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.489 [2024-05-15 04:56:46.719841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.431  Copying: 4096/4096 [B] (average 4000 kBps) 00:19:34.431 00:19:34.431 04:56:48 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:19:34.432 04:56:48 -- dd/basic_rw.sh@72 -- # [[ 2qypmx9em2p8mlpsdktn47d90jvvaxpy4ccojsj87q4xx1pwoekjk78nxor79j5jyc3kepjwsu4mu5vsdxcqf42hlritiqnm0u4srv8yo1w7k248o0vneque0spfwrghksn4yvlridwzkog09axgy813c0faoqj19sh8i652wamuij91x5asxyokcds6ci1b09di4fvp0yr4qky848ekuyfdthhu3tbafgi7pepz37cj2r2dsj7bnnkgdpnr9ewkbrgvbyyxifincgzubhocnx6q3pr9krz1p5d7amjjl1yfemriwscafgazconyqmbh15sznx60jkj64o03zbarj6qkp6g18gp2yp1w8wcht38hiwxafonedqgf7ssc4b6y7rra0jte6bimpy7h10d8oi99ais4kaatbtpm893td7rvg4ox31x6t759fn2l7mnwb5iy2wf5q5qg404c6mcsatvt8dwk02oeupy6tc81mh7mecymhla2x0yy8qt1knf9ebqggsqywbsu3vkt46qumsaz7qvfsn85qay9zfx1krmxtif81v207qdpxrefqogs9uwvyiqh89tq6plfiq3nuf39o1w4frl0p0gkbjskq9xq1y49s63xf66w5gl0irh75iq6htdk6z2xd97zxaqnz09ioi8hg6uqx008rczgwhktfvjfpfaf178eddo70100r10b12m4ny9y7ba4asdp4deen8n7psnxddeq4xh4ayk35v8wlqs0lph4et2fsuz9g508nuj95pnwqzhnhjw608qcl0y54yqbhlfzaprl1eioix9culm0ohyqv2kmiz1kg85etdr6tqmgtwbeoc8zcomf6afyszadfbl7qffgwellxlkrtr6iib6jm7i2jqnu2ab9ia37nq11wps9lhepfeymj5mdwjfl34aq9k3q2d5j9wze5mvpdqc5kna556hnjeqogg1g8n2xhf07i50pj2qukwxwotyan31ey2qvl9xrc59h4c0rdzgtq29mcsyjcvnvqinejmv3kdoshz6q7fvhsm8q0es111nd5zlu0x0sov9rkumin0ok8ao5y4tffpvfqq2pimkiy0wtrl7on2822umtdt5g5z4pgaq115w3ge5cigoq2uwsxng0udd5becniiwirex1wuv65lrs9bpfson9w4loqj3n7wk8ym2g496vuf0fqfitooxonrmiht9ckcnbdd1tlkwbnig2yndxcey4yqj26igqqlyjxdi4yggy0q7yj1vp87xff6benucx3x2sfkq5ezqnf63wsyvxm9aoc279fg0ckoaj2peias67dnv26dnxz1c989b9j8s7a0hlyjcjofh9dnm0vyl8g13os725a0e53vm6z499ucfbtohjalvk91oiiq0q4b5d9xzrwz6y8h2sg4y327i8secfv24o03r9bg1kmjkt8el613rk230y30auj7iph29x4jzjfhv213kcn8wf3qoxsj337t2n8vzdv22x6tjo9zc2b9fh3yt1gxlktlzjpd8segssgil8x1d0yknj8nxwe6646b30g5pspx8bh4iijk8qs81fwegwh6390xjf7r5c41cuxxe223gsa7lq86op22emufmjldrl6ikfab79vfsfxh52xk8takyf5ywa3ga3oz5c1vrz4p3q7ddbeqfbyyafqfykb1vsoml2jyv2mab0080gg2xzncfb85rmewjjr6iyji66jxfsedk0pgxtwvtolwa06xkw57tonytp091gn8o5ftfct7es3vw7yb00ylpl16ge0ki0lngw3vp4oktsi4n0k80c2sbu7n8a3ynyqwy1qmkc83awusx3e9g4tixesp7nmygglm29waj9rege7m6ptpqphi7apnu9vvhmet990xe3xa9fc3g6mpwvwbs8e52oai1a9bn06sbnisv5l1sazsh1ybmsmukmlhonnz4287rl3hkvnlte34uisbhztuh1qd3hvbjowieq940580m6qa3nr710gh23gk9sq73rqrwes0iy4617hud12yyuztkcip5yb7msbtv80whmjgzqgeyd1bw952ow37eh6zmmuxtvy9nxnb42hyx1qn5odnh7wz1adxe0lbkeae3kseat2givq9vmi8ewn12kk9wiz8sqz3mevor9klh2idgi6499ixfl5f9vt0m035mri9uhyvnq3nl9umfcubgx5isapt9uy3son24xyqbp551zkh0fxx4y7tle5kgj609q1avq3czqnu3j4npd1szcu1zape3f5ypruz0o29q745xf2wa7jmidb8n6omauav1fdxtsegc32kue1qc0zrnvcephlzhbtv17y4mdtsjueqs73scm22niemf25mb01nh673a7sjv923esggty44xzr52harc6f6bjm1zms7xrjovnhb0kz7odrq1dvzzo6254dvsouxd6jl7xt9qgf4b9kcxqbv4nf13zpufj6qjkn7bfq295mu2untaf693sovs59xqqmyxxn6wwsdd2g3a8b8vfl8hp83a8konbfyv6ixtdregnj0jooh5nimhegscmye6spm9ri334h5t5ekca8wmwh0kwgscu2rki3lu2n897sts10zgsrj4xyb7gh4cmf84g08umemqnzcb3gm75mjo5qqxom07qygap7nz2npd70xsrzdlfrynaldw9au2ze0cs0vjp6yfacj39l5yes5edhpdpowrmljkdawou8k6a5ahbdlxld1rml834mbjhgf9xaq9f5935ggjrgvdqn77noen6fhzx3rpebwm02hi83r0yxqkueisuly3p6ivcn78xwpuu26ya9mtzckhvq
k3iad31ohtge6tgbw5dwha80gfrsz6fsipv9jdlzj2sqiod2rg5lt7ykeplnfvdbwaqligctn5g8auamexhm8iwzvyvhsswukjwdbkapvy9327li4bfkolgil4oatb0orr9kgn2444vyrp0uvzitv3d9rtf5uh4doiit3o9m4ampcix96menuxdcq5ycufj854xj7kxva1cistzi06do6h4m1vyybc056dukltt7gb9j8l31rb28ijynaedsedb24abc2aduak7t8ht50rjnrck26algqel4gh9gsj8ch4zeyf6c3chm2ezhawygia4f3l5pjjnd3pyob8xj31e5y77pfbqpc6yne7qcl5bra82diwc6mhmg4e9zzowxrmm8ay6wfrzv9yxali0pfw0551o45vxno5od6kpkggco92fky9ac866g7uxa6nl75orbrfx52823zainu9y0mbr8ceafdvgb6b64aia1wbb29oqh9pk1pxyl9badvlt940ez0cvz7z231uqol0l7bbjhz7y951w9buudvgaprau4l7b2jjz0h448fhmq8h5bcxjxg9y66vbcatnebruzc9ta8kwarrpxx57jz87ruwn78ybgxi6nzc66pz50kzcfk7c56rmz6x6z6ouxg2gqyjxn3knubo27cd8scj0t9mah2v201a3ux9ri8maruqp46oudlk3g4zmqywg7sgt61xfo6jdb9sazlr0ewo9cbnbxoepaevmqfoly9n8sxbnkq77sbz1ykcn72t1zfwwrtzimdy0etugfsr8ve1pyhhtoszhlmiqras2v1vmxvx0ijt5nyua3d2p84tayd44riils70lewtqb7rw732be3izmq5ehfjhy6oft95120c1yuoue4epkhhlx4zxfeg97c9cwuj8j29m17td78or3rx9bba631f7s12mta8dozeta0xcrnoamj62sm2xq6bgbf10hqozkelbdeqipkbwdrqyj9db262gr26tw4v6h35mhs4xyla1n0hpvumars2hlzc3lsyoeselkfikmoi8sa9n6b7ra5nnurp025tqrbzkaybahoyf4jz5ewlp8h4aejuo5a3vs9mhtd9zj418gmrj1potscynp4zhqy5m5vjrg2qalful898f51anpb2devpkf2nfykhwoqo6cabm946qsbpn4fl7ou6vz6f2lqvow5y8i6pr3zt8w0pyqqxx == \2\q\y\p\m\x\9\e\m\2\p\8\m\l\p\s\d\k\t\n\4\7\d\9\0\j\v\v\a\x\p\y\4\c\c\o\j\s\j\8\7\q\4\x\x\1\p\w\o\e\k\j\k\7\8\n\x\o\r\7\9\j\5\j\y\c\3\k\e\p\j\w\s\u\4\m\u\5\v\s\d\x\c\q\f\4\2\h\l\r\i\t\i\q\n\m\0\u\4\s\r\v\8\y\o\1\w\7\k\2\4\8\o\0\v\n\e\q\u\e\0\s\p\f\w\r\g\h\k\s\n\4\y\v\l\r\i\d\w\z\k\o\g\0\9\a\x\g\y\8\1\3\c\0\f\a\o\q\j\1\9\s\h\8\i\6\5\2\w\a\m\u\i\j\9\1\x\5\a\s\x\y\o\k\c\d\s\6\c\i\1\b\0\9\d\i\4\f\v\p\0\y\r\4\q\k\y\8\4\8\e\k\u\y\f\d\t\h\h\u\3\t\b\a\f\g\i\7\p\e\p\z\3\7\c\j\2\r\2\d\s\j\7\b\n\n\k\g\d\p\n\r\9\e\w\k\b\r\g\v\b\y\y\x\i\f\i\n\c\g\z\u\b\h\o\c\n\x\6\q\3\p\r\9\k\r\z\1\p\5\d\7\a\m\j\j\l\1\y\f\e\m\r\i\w\s\c\a\f\g\a\z\c\o\n\y\q\m\b\h\1\5\s\z\n\x\6\0\j\k\j\6\4\o\0\3\z\b\a\r\j\6\q\k\p\6\g\1\8\g\p\2\y\p\1\w\8\w\c\h\t\3\8\h\i\w\x\a\f\o\n\e\d\q\g\f\7\s\s\c\4\b\6\y\7\r\r\a\0\j\t\e\6\b\i\m\p\y\7\h\1\0\d\8\o\i\9\9\a\i\s\4\k\a\a\t\b\t\p\m\8\9\3\t\d\7\r\v\g\4\o\x\3\1\x\6\t\7\5\9\f\n\2\l\7\m\n\w\b\5\i\y\2\w\f\5\q\5\q\g\4\0\4\c\6\m\c\s\a\t\v\t\8\d\w\k\0\2\o\e\u\p\y\6\t\c\8\1\m\h\7\m\e\c\y\m\h\l\a\2\x\0\y\y\8\q\t\1\k\n\f\9\e\b\q\g\g\s\q\y\w\b\s\u\3\v\k\t\4\6\q\u\m\s\a\z\7\q\v\f\s\n\8\5\q\a\y\9\z\f\x\1\k\r\m\x\t\i\f\8\1\v\2\0\7\q\d\p\x\r\e\f\q\o\g\s\9\u\w\v\y\i\q\h\8\9\t\q\6\p\l\f\i\q\3\n\u\f\3\9\o\1\w\4\f\r\l\0\p\0\g\k\b\j\s\k\q\9\x\q\1\y\4\9\s\6\3\x\f\6\6\w\5\g\l\0\i\r\h\7\5\i\q\6\h\t\d\k\6\z\2\x\d\9\7\z\x\a\q\n\z\0\9\i\o\i\8\h\g\6\u\q\x\0\0\8\r\c\z\g\w\h\k\t\f\v\j\f\p\f\a\f\1\7\8\e\d\d\o\7\0\1\0\0\r\1\0\b\1\2\m\4\n\y\9\y\7\b\a\4\a\s\d\p\4\d\e\e\n\8\n\7\p\s\n\x\d\d\e\q\4\x\h\4\a\y\k\3\5\v\8\w\l\q\s\0\l\p\h\4\e\t\2\f\s\u\z\9\g\5\0\8\n\u\j\9\5\p\n\w\q\z\h\n\h\j\w\6\0\8\q\c\l\0\y\5\4\y\q\b\h\l\f\z\a\p\r\l\1\e\i\o\i\x\9\c\u\l\m\0\o\h\y\q\v\2\k\m\i\z\1\k\g\8\5\e\t\d\r\6\t\q\m\g\t\w\b\e\o\c\8\z\c\o\m\f\6\a\f\y\s\z\a\d\f\b\l\7\q\f\f\g\w\e\l\l\x\l\k\r\t\r\6\i\i\b\6\j\m\7\i\2\j\q\n\u\2\a\b\9\i\a\3\7\n\q\1\1\w\p\s\9\l\h\e\p\f\e\y\m\j\5\m\d\w\j\f\l\3\4\a\q\9\k\3\q\2\d\5\j\9\w\z\e\5\m\v\p\d\q\c\5\k\n\a\5\5\6\h\n\j\e\q\o\g\g\1\g\8\n\2\x\h\f\0\7\i\5\0\p\j\2\q\u\k\w\x\w\o\t\y\a\n\3\1\e\y\2\q\v\l\9\x\r\c\5\9\h\4\c\0\r\d\z\g\t\q\2\9\m\c\s\y\j\c\v\n\v\q\i\n\e\j\m\v\3\k\d\o\s\h\z\6\q\7\f\v\h\s\m\8\q\0\e\s\1\1\1\n\d\5\z\l\u\0\x\0\s\o\v\9\r\k\u\m\i\n\0\o\k\8\a\o\5\y\4\t\f\f\p\v\f\q\q\2\p\i\m\k\i\y\0\w\t\r\l\7\o\n\2\8\2\2\u\m\t\d\t\5\g\5\z\4\p\g\a\q\1\1\5\w\3\g\e\5\c\i\g\o\q\2\u\w\s\x\n\g\0\u\d\d\5\b\e\c\n\i\i\w\i
\r\e\x\1\w\u\v\6\5\l\r\s\9\b\p\f\s\o\n\9\w\4\l\o\q\j\3\n\7\w\k\8\y\m\2\g\4\9\6\v\u\f\0\f\q\f\i\t\o\o\x\o\n\r\m\i\h\t\9\c\k\c\n\b\d\d\1\t\l\k\w\b\n\i\g\2\y\n\d\x\c\e\y\4\y\q\j\2\6\i\g\q\q\l\y\j\x\d\i\4\y\g\g\y\0\q\7\y\j\1\v\p\8\7\x\f\f\6\b\e\n\u\c\x\3\x\2\s\f\k\q\5\e\z\q\n\f\6\3\w\s\y\v\x\m\9\a\o\c\2\7\9\f\g\0\c\k\o\a\j\2\p\e\i\a\s\6\7\d\n\v\2\6\d\n\x\z\1\c\9\8\9\b\9\j\8\s\7\a\0\h\l\y\j\c\j\o\f\h\9\d\n\m\0\v\y\l\8\g\1\3\o\s\7\2\5\a\0\e\5\3\v\m\6\z\4\9\9\u\c\f\b\t\o\h\j\a\l\v\k\9\1\o\i\i\q\0\q\4\b\5\d\9\x\z\r\w\z\6\y\8\h\2\s\g\4\y\3\2\7\i\8\s\e\c\f\v\2\4\o\0\3\r\9\b\g\1\k\m\j\k\t\8\e\l\6\1\3\r\k\2\3\0\y\3\0\a\u\j\7\i\p\h\2\9\x\4\j\z\j\f\h\v\2\1\3\k\c\n\8\w\f\3\q\o\x\s\j\3\3\7\t\2\n\8\v\z\d\v\2\2\x\6\t\j\o\9\z\c\2\b\9\f\h\3\y\t\1\g\x\l\k\t\l\z\j\p\d\8\s\e\g\s\s\g\i\l\8\x\1\d\0\y\k\n\j\8\n\x\w\e\6\6\4\6\b\3\0\g\5\p\s\p\x\8\b\h\4\i\i\j\k\8\q\s\8\1\f\w\e\g\w\h\6\3\9\0\x\j\f\7\r\5\c\4\1\c\u\x\x\e\2\2\3\g\s\a\7\l\q\8\6\o\p\2\2\e\m\u\f\m\j\l\d\r\l\6\i\k\f\a\b\7\9\v\f\s\f\x\h\5\2\x\k\8\t\a\k\y\f\5\y\w\a\3\g\a\3\o\z\5\c\1\v\r\z\4\p\3\q\7\d\d\b\e\q\f\b\y\y\a\f\q\f\y\k\b\1\v\s\o\m\l\2\j\y\v\2\m\a\b\0\0\8\0\g\g\2\x\z\n\c\f\b\8\5\r\m\e\w\j\j\r\6\i\y\j\i\6\6\j\x\f\s\e\d\k\0\p\g\x\t\w\v\t\o\l\w\a\0\6\x\k\w\5\7\t\o\n\y\t\p\0\9\1\g\n\8\o\5\f\t\f\c\t\7\e\s\3\v\w\7\y\b\0\0\y\l\p\l\1\6\g\e\0\k\i\0\l\n\g\w\3\v\p\4\o\k\t\s\i\4\n\0\k\8\0\c\2\s\b\u\7\n\8\a\3\y\n\y\q\w\y\1\q\m\k\c\8\3\a\w\u\s\x\3\e\9\g\4\t\i\x\e\s\p\7\n\m\y\g\g\l\m\2\9\w\a\j\9\r\e\g\e\7\m\6\p\t\p\q\p\h\i\7\a\p\n\u\9\v\v\h\m\e\t\9\9\0\x\e\3\x\a\9\f\c\3\g\6\m\p\w\v\w\b\s\8\e\5\2\o\a\i\1\a\9\b\n\0\6\s\b\n\i\s\v\5\l\1\s\a\z\s\h\1\y\b\m\s\m\u\k\m\l\h\o\n\n\z\4\2\8\7\r\l\3\h\k\v\n\l\t\e\3\4\u\i\s\b\h\z\t\u\h\1\q\d\3\h\v\b\j\o\w\i\e\q\9\4\0\5\8\0\m\6\q\a\3\n\r\7\1\0\g\h\2\3\g\k\9\s\q\7\3\r\q\r\w\e\s\0\i\y\4\6\1\7\h\u\d\1\2\y\y\u\z\t\k\c\i\p\5\y\b\7\m\s\b\t\v\8\0\w\h\m\j\g\z\q\g\e\y\d\1\b\w\9\5\2\o\w\3\7\e\h\6\z\m\m\u\x\t\v\y\9\n\x\n\b\4\2\h\y\x\1\q\n\5\o\d\n\h\7\w\z\1\a\d\x\e\0\l\b\k\e\a\e\3\k\s\e\a\t\2\g\i\v\q\9\v\m\i\8\e\w\n\1\2\k\k\9\w\i\z\8\s\q\z\3\m\e\v\o\r\9\k\l\h\2\i\d\g\i\6\4\9\9\i\x\f\l\5\f\9\v\t\0\m\0\3\5\m\r\i\9\u\h\y\v\n\q\3\n\l\9\u\m\f\c\u\b\g\x\5\i\s\a\p\t\9\u\y\3\s\o\n\2\4\x\y\q\b\p\5\5\1\z\k\h\0\f\x\x\4\y\7\t\l\e\5\k\g\j\6\0\9\q\1\a\v\q\3\c\z\q\n\u\3\j\4\n\p\d\1\s\z\c\u\1\z\a\p\e\3\f\5\y\p\r\u\z\0\o\2\9\q\7\4\5\x\f\2\w\a\7\j\m\i\d\b\8\n\6\o\m\a\u\a\v\1\f\d\x\t\s\e\g\c\3\2\k\u\e\1\q\c\0\z\r\n\v\c\e\p\h\l\z\h\b\t\v\1\7\y\4\m\d\t\s\j\u\e\q\s\7\3\s\c\m\2\2\n\i\e\m\f\2\5\m\b\0\1\n\h\6\7\3\a\7\s\j\v\9\2\3\e\s\g\g\t\y\4\4\x\z\r\5\2\h\a\r\c\6\f\6\b\j\m\1\z\m\s\7\x\r\j\o\v\n\h\b\0\k\z\7\o\d\r\q\1\d\v\z\z\o\6\2\5\4\d\v\s\o\u\x\d\6\j\l\7\x\t\9\q\g\f\4\b\9\k\c\x\q\b\v\4\n\f\1\3\z\p\u\f\j\6\q\j\k\n\7\b\f\q\2\9\5\m\u\2\u\n\t\a\f\6\9\3\s\o\v\s\5\9\x\q\q\m\y\x\x\n\6\w\w\s\d\d\2\g\3\a\8\b\8\v\f\l\8\h\p\8\3\a\8\k\o\n\b\f\y\v\6\i\x\t\d\r\e\g\n\j\0\j\o\o\h\5\n\i\m\h\e\g\s\c\m\y\e\6\s\p\m\9\r\i\3\3\4\h\5\t\5\e\k\c\a\8\w\m\w\h\0\k\w\g\s\c\u\2\r\k\i\3\l\u\2\n\8\9\7\s\t\s\1\0\z\g\s\r\j\4\x\y\b\7\g\h\4\c\m\f\8\4\g\0\8\u\m\e\m\q\n\z\c\b\3\g\m\7\5\m\j\o\5\q\q\x\o\m\0\7\q\y\g\a\p\7\n\z\2\n\p\d\7\0\x\s\r\z\d\l\f\r\y\n\a\l\d\w\9\a\u\2\z\e\0\c\s\0\v\j\p\6\y\f\a\c\j\3\9\l\5\y\e\s\5\e\d\h\p\d\p\o\w\r\m\l\j\k\d\a\w\o\u\8\k\6\a\5\a\h\b\d\l\x\l\d\1\r\m\l\8\3\4\m\b\j\h\g\f\9\x\a\q\9\f\5\9\3\5\g\g\j\r\g\v\d\q\n\7\7\n\o\e\n\6\f\h\z\x\3\r\p\e\b\w\m\0\2\h\i\8\3\r\0\y\x\q\k\u\e\i\s\u\l\y\3\p\6\i\v\c\n\7\8\x\w\p\u\u\2\6\y\a\9\m\t\z\c\k\h\v\q\k\3\i\a\d\3\1\o\h\t\g\e\6\t\g\b\w\5\d\w\h\a\8\0\g\f\r\s\z\6\f\s\i\p\v\9\j\d\l\z\j\2\s\q\i\o\d\2\r\g\5\l\t\7\y\k\e\p\l\n\f\v\d\b\w\a\q\l\i\g\c\t\
n\5\g\8\a\u\a\m\e\x\h\m\8\i\w\z\v\y\v\h\s\s\w\u\k\j\w\d\b\k\a\p\v\y\9\3\2\7\l\i\4\b\f\k\o\l\g\i\l\4\o\a\t\b\0\o\r\r\9\k\g\n\2\4\4\4\v\y\r\p\0\u\v\z\i\t\v\3\d\9\r\t\f\5\u\h\4\d\o\i\i\t\3\o\9\m\4\a\m\p\c\i\x\9\6\m\e\n\u\x\d\c\q\5\y\c\u\f\j\8\5\4\x\j\7\k\x\v\a\1\c\i\s\t\z\i\0\6\d\o\6\h\4\m\1\v\y\y\b\c\0\5\6\d\u\k\l\t\t\7\g\b\9\j\8\l\3\1\r\b\2\8\i\j\y\n\a\e\d\s\e\d\b\2\4\a\b\c\2\a\d\u\a\k\7\t\8\h\t\5\0\r\j\n\r\c\k\2\6\a\l\g\q\e\l\4\g\h\9\g\s\j\8\c\h\4\z\e\y\f\6\c\3\c\h\m\2\e\z\h\a\w\y\g\i\a\4\f\3\l\5\p\j\j\n\d\3\p\y\o\b\8\x\j\3\1\e\5\y\7\7\p\f\b\q\p\c\6\y\n\e\7\q\c\l\5\b\r\a\8\2\d\i\w\c\6\m\h\m\g\4\e\9\z\z\o\w\x\r\m\m\8\a\y\6\w\f\r\z\v\9\y\x\a\l\i\0\p\f\w\0\5\5\1\o\4\5\v\x\n\o\5\o\d\6\k\p\k\g\g\c\o\9\2\f\k\y\9\a\c\8\6\6\g\7\u\x\a\6\n\l\7\5\o\r\b\r\f\x\5\2\8\2\3\z\a\i\n\u\9\y\0\m\b\r\8\c\e\a\f\d\v\g\b\6\b\6\4\a\i\a\1\w\b\b\2\9\o\q\h\9\p\k\1\p\x\y\l\9\b\a\d\v\l\t\9\4\0\e\z\0\c\v\z\7\z\2\3\1\u\q\o\l\0\l\7\b\b\j\h\z\7\y\9\5\1\w\9\b\u\u\d\v\g\a\p\r\a\u\4\l\7\b\2\j\j\z\0\h\4\4\8\f\h\m\q\8\h\5\b\c\x\j\x\g\9\y\6\6\v\b\c\a\t\n\e\b\r\u\z\c\9\t\a\8\k\w\a\r\r\p\x\x\5\7\j\z\8\7\r\u\w\n\7\8\y\b\g\x\i\6\n\z\c\6\6\p\z\5\0\k\z\c\f\k\7\c\5\6\r\m\z\6\x\6\z\6\o\u\x\g\2\g\q\y\j\x\n\3\k\n\u\b\o\2\7\c\d\8\s\c\j\0\t\9\m\a\h\2\v\2\0\1\a\3\u\x\9\r\i\8\m\a\r\u\q\p\4\6\o\u\d\l\k\3\g\4\z\m\q\y\w\g\7\s\g\t\6\1\x\f\o\6\j\d\b\9\s\a\z\l\r\0\e\w\o\9\c\b\n\b\x\o\e\p\a\e\v\m\q\f\o\l\y\9\n\8\s\x\b\n\k\q\7\7\s\b\z\1\y\k\c\n\7\2\t\1\z\f\w\w\r\t\z\i\m\d\y\0\e\t\u\g\f\s\r\8\v\e\1\p\y\h\h\t\o\s\z\h\l\m\i\q\r\a\s\2\v\1\v\m\x\v\x\0\i\j\t\5\n\y\u\a\3\d\2\p\8\4\t\a\y\d\4\4\r\i\i\l\s\7\0\l\e\w\t\q\b\7\r\w\7\3\2\b\e\3\i\z\m\q\5\e\h\f\j\h\y\6\o\f\t\9\5\1\2\0\c\1\y\u\o\u\e\4\e\p\k\h\h\l\x\4\z\x\f\e\g\9\7\c\9\c\w\u\j\8\j\2\9\m\1\7\t\d\7\8\o\r\3\r\x\9\b\b\a\6\3\1\f\7\s\1\2\m\t\a\8\d\o\z\e\t\a\0\x\c\r\n\o\a\m\j\6\2\s\m\2\x\q\6\b\g\b\f\1\0\h\q\o\z\k\e\l\b\d\e\q\i\p\k\b\w\d\r\q\y\j\9\d\b\2\6\2\g\r\2\6\t\w\4\v\6\h\3\5\m\h\s\4\x\y\l\a\1\n\0\h\p\v\u\m\a\r\s\2\h\l\z\c\3\l\s\y\o\e\s\e\l\k\f\i\k\m\o\i\8\s\a\9\n\6\b\7\r\a\5\n\n\u\r\p\0\2\5\t\q\r\b\z\k\a\y\b\a\h\o\y\f\4\j\z\5\e\w\l\p\8\h\4\a\e\j\u\o\5\a\3\v\s\9\m\h\t\d\9\z\j\4\1\8\g\m\r\j\1\p\o\t\s\c\y\n\p\4\z\h\q\y\5\m\5\v\j\r\g\2\q\a\l\f\u\l\8\9\8\f\5\1\a\n\p\b\2\d\e\v\p\k\f\2\n\f\y\k\h\w\o\q\o\6\c\a\b\m\9\4\6\q\s\b\p\n\4\f\l\7\o\u\6\v\z\6\f\2\l\q\v\o\w\5\y\8\i\6\p\r\3\z\t\8\w\0\p\y\q\q\x\x ]] 00:19:34.432 ************************************ 00:19:34.432 END TEST dd_rw_offset 00:19:34.432 ************************************ 00:19:34.432 00:19:34.432 real 0m4.773s 00:19:34.432 user 0m3.802s 00:19:34.432 sys 0m0.708s 00:19:34.432 04:56:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.432 04:56:48 -- common/autotest_common.sh@10 -- # set +x 00:19:34.432 04:56:48 -- dd/basic_rw.sh@1 -- # cleanup 00:19:34.432 04:56:48 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:19:34.432 04:56:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:34.432 04:56:48 -- dd/common.sh@11 -- # local nvme_ref= 00:19:34.432 04:56:48 -- dd/common.sh@12 -- # local size=0xffff 00:19:34.432 04:56:48 -- dd/common.sh@14 -- # local bs=1048576 00:19:34.432 04:56:48 -- dd/common.sh@15 -- # local count=1 00:19:34.432 04:56:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:34.432 04:56:48 -- dd/common.sh@18 -- # gen_conf 00:19:34.432 04:56:48 -- dd/common.sh@31 -- # xtrace_disable 00:19:34.432 04:56:48 -- common/autotest_common.sh@10 -- # set +x 00:19:34.432 { 00:19:34.432 "subsystems": [ 00:19:34.432 { 00:19:34.432 
"subsystem": "bdev", 00:19:34.432 "config": [ 00:19:34.432 { 00:19:34.432 "params": { 00:19:34.432 "trtype": "pcie", 00:19:34.432 "name": "Nvme0", 00:19:34.432 "traddr": "0000:00:06.0" 00:19:34.432 }, 00:19:34.432 "method": "bdev_nvme_attach_controller" 00:19:34.432 }, 00:19:34.432 { 00:19:34.432 "method": "bdev_wait_for_examine" 00:19:34.432 } 00:19:34.432 ] 00:19:34.432 } 00:19:34.432 ] 00:19:34.432 } 00:19:34.690 [2024-05-15 04:56:48.725996] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:34.690 [2024-05-15 04:56:48.726155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58687 ] 00:19:34.690 [2024-05-15 04:56:48.879578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.949 [2024-05-15 04:56:49.107069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.891  Copying: 1024/1024 [kB] (average 500 MBps) 00:19:36.891 00:19:36.891 04:56:50 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:36.891 00:19:36.891 real 0m55.796s 00:19:36.891 user 0m44.592s 00:19:36.891 sys 0m8.039s 00:19:36.891 04:56:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.891 04:56:50 -- common/autotest_common.sh@10 -- # set +x 00:19:36.891 ************************************ 00:19:36.891 END TEST spdk_dd_basic_rw 00:19:36.891 ************************************ 00:19:36.891 04:56:51 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:19:36.891 04:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:36.891 04:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:36.891 04:56:51 -- common/autotest_common.sh@10 -- # set +x 00:19:36.891 ************************************ 00:19:36.891 START TEST spdk_dd_posix 00:19:36.891 ************************************ 00:19:36.891 04:56:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:19:36.891 * Looking for test storage... 
00:19:36.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:19:36.891 04:56:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:19:36.891 04:56:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:36.891 04:56:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:36.891 04:56:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:36.891 04:56:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:19:36.891 04:56:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:19:36.891 04:56:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:19:36.891 04:56:51 -- paths/export.sh@5 -- # export PATH
00:19:36.891 04:56:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
00:19:36.891 04:56:51 -- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:19:36.891 04:56:51 -- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:19:36.891 04:56:51 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:19:36.891 04:56:51 -- dd/posix.sh@125 -- # trap cleanup EXIT
00:19:36.891 04:56:51 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:19:36.891 04:56:51 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:36.891 04:56:51 -- dd/posix.sh@130 -- # tests
00:19:36.891 04:56:51 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO'
00:19:36.891 * First test run, using AIO
00:19:36.891 04:56:51 -- dd/posix.sh@102 -- # run_test dd_flag_append append
00:19:36.891 04:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:19:36.891 04:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:36.891 04:56:51 -- common/autotest_common.sh@10 -- # set +x
00:19:37.149 ************************************
00:19:37.149 START TEST dd_flag_append
00:19:37.149 ************************************
00:19:37.149 04:56:51 -- common/autotest_common.sh@1104 -- # append
00:19:37.149 04:56:51 -- dd/posix.sh@16 -- # local dump0
00:19:37.149 04:56:51 -- dd/posix.sh@17 -- # local dump1
00:19:37.149 04:56:51 -- dd/posix.sh@19 -- # gen_bytes 32
00:19:37.149 04:56:51 -- dd/common.sh@98 -- # xtrace_disable
00:19:37.149 04:56:51 -- common/autotest_common.sh@10 -- # set +x
00:19:37.149 04:56:51 -- dd/posix.sh@19 -- # dump0=8jgqrf3jbn51xatpt9x30t4q5jsjmfux
00:19:37.149 04:56:51 -- dd/posix.sh@20 -- # gen_bytes 32
00:19:37.149 04:56:51 -- dd/common.sh@98 -- # xtrace_disable
00:19:37.149 04:56:51 -- common/autotest_common.sh@10 -- # set +x
00:19:37.149 04:56:51 -- dd/posix.sh@20 -- # dump1=6ps1nfxvh0jaa5xtxcbox0edrewr0pon
00:19:37.149 04:56:51 -- dd/posix.sh@22 -- # printf %s 8jgqrf3jbn51xatpt9x30t4q5jsjmfux
00:19:37.149 04:56:51 -- dd/posix.sh@23 -- # printf %s 6ps1nfxvh0jaa5xtxcbox0edrewr0pon
00:19:37.149 04:56:51 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:19:37.407 [2024-05-15 04:56:51.276790] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:37.407 [2024-05-15 04:56:51.276957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58785 ]
00:19:37.407 [2024-05-15 04:56:51.428486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:37.666 [2024-05-15 04:56:51.655184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:39.297  Copying: 32/32 [B] (average 31 kBps)
00:19:39.297
00:19:39.297 ************************************
00:19:39.297 END TEST dd_flag_append
00:19:39.297 ************************************
00:19:39.297 04:56:53 -- dd/posix.sh@27 -- # [[ 6ps1nfxvh0jaa5xtxcbox0edrewr0pon8jgqrf3jbn51xatpt9x30t4q5jsjmfux == \6\p\s\1\n\f\x\v\h\0\j\a\a\5\x\t\x\c\b\o\x\0\e\d\r\e\w\r\0\p\o\n\8\j\g\q\r\f\3\j\b\n\5\1\x\a\t\p\t\9\x\3\0\t\4\q\5\j\s\j\m\f\u\x ]]
00:19:39.297
00:19:39.297 real 0m2.318s
00:19:39.297 user 0m1.797s
00:19:39.297 sys 0m0.319s
00:19:39.297 04:56:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:39.297 04:56:53 -- common/autotest_common.sh@10 -- # set +x
00:19:39.297 04:56:53 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:19:39.297 04:56:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:19:39.297 04:56:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:39.297 04:56:53 -- common/autotest_common.sh@10 -- # set +x
00:19:39.297 ************************************
00:19:39.297 START TEST dd_flag_directory
00:19:39.297 ************************************
00:19:39.297 04:56:53 -- common/autotest_common.sh@1104 -- # directory
00:19:39.297 04:56:53 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:19:39.297 04:56:53 -- common/autotest_common.sh@640 -- # local es=0
00:19:39.297 04:56:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:19:39.297 04:56:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:39.297 04:56:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:39.297 04:56:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:39.297 04:56:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:39.297 04:56:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:39.297 04:56:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:39.297 04:56:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:39.297 04:56:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:19:39.297 04:56:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:19:39.555 [2024-05-15 04:56:53.641976] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:39.555 [2024-05-15 04:56:53.642130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58842 ]
00:19:39.813 [2024-05-15 04:56:53.793408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.813 [2024-05-15 04:56:54.021276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:40.381 [2024-05-15 04:56:54.438412] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:19:40.381 [2024-05-15 04:56:54.438496] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:19:40.381 [2024-05-15 04:56:54.438523] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:41.314 [2024-05-15 04:56:55.345285] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:19:41.573 04:56:55 -- common/autotest_common.sh@643 -- # es=236
00:19:41.573 04:56:55 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:19:41.573 04:56:55 -- common/autotest_common.sh@652 -- # es=108
00:19:41.573 04:56:55 -- common/autotest_common.sh@653 -- # case "$es" in
00:19:41.573 04:56:55 -- common/autotest_common.sh@660 -- # es=1
00:19:41.573 04:56:55 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:19:41.573 04:56:55 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:19:41.573 04:56:55 -- common/autotest_common.sh@640 -- # local es=0
00:19:41.573 04:56:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:19:41.573 04:56:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:41.573 04:56:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:41.573 04:56:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:41.573 04:56:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:41.573 04:56:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:41.573 04:56:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:41.573 04:56:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:41.573 04:56:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:19:41.573 04:56:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:19:41.831 [2024-05-15 04:56:55.919598] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:41.831 [2024-05-15 04:56:55.919922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58874 ]
00:19:42.089 [2024-05-15 04:56:56.072461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:42.089 [2024-05-15 04:56:56.297319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:42.655 [2024-05-15 04:56:56.734845] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:19:42.655 [2024-05-15 04:56:56.734938] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:19:42.655 [2024-05-15 04:56:56.734986] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:43.591 [2024-05-15 04:56:57.622428] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:19:43.850 04:56:58 -- common/autotest_common.sh@643 -- # es=236
00:19:43.850 04:56:58 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:19:43.850 04:56:58 -- common/autotest_common.sh@652 -- # es=108
00:19:43.850 04:56:58 -- common/autotest_common.sh@653 -- # case "$es" in
00:19:43.850 04:56:58 -- common/autotest_common.sh@660 -- # es=1
00:19:43.850 04:56:58 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:19:43.850 ************************************
00:19:43.850 END TEST dd_flag_directory
00:19:43.850 ************************************
00:19:43.850
00:19:43.850 real 0m4.559s
00:19:43.850 user 0m3.536s
00:19:43.850 sys 0m0.621s
00:19:43.850 04:56:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:43.850 04:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:44.110 04:56:58 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:19:44.110 04:56:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:19:44.110 04:56:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:44.110 04:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:44.110 ************************************
00:19:44.110 START TEST dd_flag_nofollow
00:19:44.110 ************************************
00:19:44.110 04:56:58 -- common/autotest_common.sh@1104 -- # nofollow
00:19:44.110 04:56:58 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:19:44.110 04:56:58 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:19:44.110 04:56:58 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:19:44.110 04:56:58 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:19:44.110 04:56:58 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:44.110 04:56:58 -- common/autotest_common.sh@640 -- # local es=0
00:19:44.110 04:56:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:44.110 04:56:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:44.110 04:56:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:44.110 04:56:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:44.110 04:56:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:44.110 04:56:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:44.110 04:56:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:44.110 04:56:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:44.110 04:56:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:19:44.110 04:56:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:44.110 [2024-05-15 04:56:58.263979] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:44.110 [2024-05-15 04:56:58.264137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ]
00:19:44.369 [2024-05-15 04:56:58.427687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.627 [2024-05-15 04:56:58.647801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.886 [2024-05-15 04:56:59.066251] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:19:44.886 [2024-05-15 04:56:59.066334] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:19:44.886 [2024-05-15 04:56:59.066361] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:45.853 [2024-05-15 04:56:59.963888] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:19:46.418 04:57:00 -- common/autotest_common.sh@643 -- # es=216
00:19:46.418 04:57:00 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:19:46.418 04:57:00 -- common/autotest_common.sh@652 -- # es=88
00:19:46.418 04:57:00 -- common/autotest_common.sh@653 -- # case "$es" in
00:19:46.418 04:57:00 -- common/autotest_common.sh@660 -- # es=1
00:19:46.418 04:57:00 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:19:46.418 04:57:00 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:19:46.419 04:57:00 -- common/autotest_common.sh@640 -- # local es=0
00:19:46.419 04:57:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:19:46.419 04:57:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:46.419 04:57:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:46.419 04:57:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:46.419 04:57:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:46.419 04:57:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:46.419 04:57:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:46.419 04:57:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:46.419 04:57:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:19:46.419 04:57:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:19:46.419 [2024-05-15 04:57:00.554132] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:46.419 [2024-05-15 04:57:00.554294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58960 ]
00:19:46.677 [2024-05-15 04:57:00.714809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:46.936 [2024-05-15 04:57:00.937924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:47.195 [2024-05-15 04:57:01.377623] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:19:47.195 [2024-05-15 04:57:01.377703] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:19:47.195 [2024-05-15 04:57:01.377992] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:48.129 [2024-05-15 04:57:02.264473] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:19:48.696 04:57:02 -- common/autotest_common.sh@643 -- # es=216
00:19:48.696 04:57:02 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:19:48.696 04:57:02 -- common/autotest_common.sh@652 -- # es=88
00:19:48.696 04:57:02 -- common/autotest_common.sh@653 -- # case "$es" in
00:19:48.696 04:57:02 -- common/autotest_common.sh@660 -- # es=1
00:19:48.696 04:57:02 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:19:48.696 04:57:02 -- dd/posix.sh@46 -- # gen_bytes 512
00:19:48.696 04:57:02 -- dd/common.sh@98 -- # xtrace_disable
00:19:48.696 04:57:02 -- common/autotest_common.sh@10 -- # set +x
00:19:48.696 04:57:02 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:19:48.696 [2024-05-15 04:57:02.827905] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:19:48.696 [2024-05-15 04:57:02.828055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:19:48.954 [2024-05-15 04:57:02.982581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.212 [2024-05-15 04:57:03.217694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.843  Copying: 512/512 [B] (average 500 kBps) 00:19:50.843 00:19:50.843 ************************************ 00:19:50.843 END TEST dd_flag_nofollow 00:19:50.843 ************************************ 00:19:50.843 04:57:04 -- dd/posix.sh@49 -- # [[ np9vi62t6kinv478p88l3yoxgrh65s01khm9etizuzcddailbwjk1t1t11wn9e4b2acb8fwweog4hwdrnudu5gc99lxdgyqshytvo0ugut8w1ydnkg5dn7sslttizmzvxr93kpq1o87ma05tq2o4aa1ute5ogwy46h0s5ozifxmumzbklviykpn0dnh8wc6jknzl9s4s57g87l05a7wdvet51itkhtxyi2p9f2cl4am2hn4h2gu3eovht9v7jr2dqc3ep1kc3mhprl4wq4le8j4tr678ap3hv9o289ris9fxpxr2oj2j0xvbgco5y6o76j2fbsbluw13i25pg0islz7mjvhnu68ycd8rt6clwr78lblxp7vs3zhtl8m8qytqjogo5jew8ywt54ea9dgt622nmbyor64ibnna3g41vkdazrarqlp3u8ykcuq24wshq95mhktuc33k62bypxrife6fxq50gouejv6cztq3530fwq6ww6ica74a68oww0i8 == \n\p\9\v\i\6\2\t\6\k\i\n\v\4\7\8\p\8\8\l\3\y\o\x\g\r\h\6\5\s\0\1\k\h\m\9\e\t\i\z\u\z\c\d\d\a\i\l\b\w\j\k\1\t\1\t\1\1\w\n\9\e\4\b\2\a\c\b\8\f\w\w\e\o\g\4\h\w\d\r\n\u\d\u\5\g\c\9\9\l\x\d\g\y\q\s\h\y\t\v\o\0\u\g\u\t\8\w\1\y\d\n\k\g\5\d\n\7\s\s\l\t\t\i\z\m\z\v\x\r\9\3\k\p\q\1\o\8\7\m\a\0\5\t\q\2\o\4\a\a\1\u\t\e\5\o\g\w\y\4\6\h\0\s\5\o\z\i\f\x\m\u\m\z\b\k\l\v\i\y\k\p\n\0\d\n\h\8\w\c\6\j\k\n\z\l\9\s\4\s\5\7\g\8\7\l\0\5\a\7\w\d\v\e\t\5\1\i\t\k\h\t\x\y\i\2\p\9\f\2\c\l\4\a\m\2\h\n\4\h\2\g\u\3\e\o\v\h\t\9\v\7\j\r\2\d\q\c\3\e\p\1\k\c\3\m\h\p\r\l\4\w\q\4\l\e\8\j\4\t\r\6\7\8\a\p\3\h\v\9\o\2\8\9\r\i\s\9\f\x\p\x\r\2\o\j\2\j\0\x\v\b\g\c\o\5\y\6\o\7\6\j\2\f\b\s\b\l\u\w\1\3\i\2\5\p\g\0\i\s\l\z\7\m\j\v\h\n\u\6\8\y\c\d\8\r\t\6\c\l\w\r\7\8\l\b\l\x\p\7\v\s\3\z\h\t\l\8\m\8\q\y\t\q\j\o\g\o\5\j\e\w\8\y\w\t\5\4\e\a\9\d\g\t\6\2\2\n\m\b\y\o\r\6\4\i\b\n\n\a\3\g\4\1\v\k\d\a\z\r\a\r\q\l\p\3\u\8\y\k\c\u\q\2\4\w\s\h\q\9\5\m\h\k\t\u\c\3\3\k\6\2\b\y\p\x\r\i\f\e\6\f\x\q\5\0\g\o\u\e\j\v\6\c\z\t\q\3\5\3\0\f\w\q\6\w\w\6\i\c\a\7\4\a\6\8\o\w\w\0\i\8 ]] 00:19:50.843 00:19:50.843 real 0m6.853s 00:19:50.843 user 0m5.296s 00:19:50.843 sys 0m0.959s 00:19:50.843 04:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.843 04:57:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.843 04:57:05 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:19:50.843 04:57:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:50.843 04:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.843 04:57:05 -- common/autotest_common.sh@10 -- # set +x 00:19:50.843 ************************************ 00:19:50.843 START TEST dd_flag_noatime 00:19:50.843 ************************************ 00:19:50.843 04:57:05 -- common/autotest_common.sh@1104 -- # noatime 00:19:50.843 04:57:05 -- dd/posix.sh@53 -- # local atime_if 00:19:50.843 04:57:05 -- dd/posix.sh@54 -- # local atime_of 00:19:50.843 04:57:05 -- dd/posix.sh@58 -- # gen_bytes 512 00:19:50.843 04:57:05 -- dd/common.sh@98 -- # xtrace_disable 00:19:50.843 04:57:05 -- common/autotest_common.sh@10 -- # set +x 00:19:50.843 04:57:05 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:50.843 04:57:05 -- dd/posix.sh@60 -- # atime_if=1715749023 00:19:50.843 04:57:05 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:50.843 04:57:05 -- dd/posix.sh@61 -- # atime_of=1715749024 00:19:50.843 04:57:05 -- dd/posix.sh@66 -- # sleep 1 00:19:52.220 04:57:06 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:52.220 [2024-05-15 04:57:06.201406] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:52.220 [2024-05-15 04:57:06.201555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:19:52.220 [2024-05-15 04:57:06.355415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.479 [2024-05-15 04:57:06.583298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.421  Copying: 512/512 [B] (average 500 kBps) 00:19:54.421 00:19:54.421 04:57:08 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:54.421 04:57:08 -- dd/posix.sh@69 -- # (( atime_if == 1715749023 )) 00:19:54.421 04:57:08 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:54.421 04:57:08 -- dd/posix.sh@70 -- # (( atime_of == 1715749024 )) 00:19:54.421 04:57:08 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:54.421 [2024-05-15 04:57:08.516837] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:54.421 [2024-05-15 04:57:08.516994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59098 ] 00:19:54.680 [2024-05-15 04:57:08.670145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.680 [2024-05-15 04:57:08.885358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.624  Copying: 512/512 [B] (average 500 kBps) 00:19:56.624 00:19:56.624 04:57:10 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:56.624 ************************************ 00:19:56.624 END TEST dd_flag_noatime 00:19:56.624 ************************************ 00:19:56.624 04:57:10 -- dd/posix.sh@73 -- # (( atime_if < 1715749029 )) 00:19:56.624 00:19:56.624 real 0m5.615s 00:19:56.624 user 0m3.540s 00:19:56.624 sys 0m0.672s 00:19:56.624 04:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.624 04:57:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.624 04:57:10 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:19:56.624 04:57:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:56.624 04:57:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:56.624 04:57:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.624 ************************************ 00:19:56.624 START TEST dd_flags_misc 00:19:56.624 ************************************ 00:19:56.624 04:57:10 -- common/autotest_common.sh@1104 -- # io 00:19:56.624 04:57:10 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:19:56.624 04:57:10 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:19:56.624 04:57:10 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:19:56.624 04:57:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:56.624 04:57:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:19:56.624 04:57:10 -- dd/common.sh@98 -- # xtrace_disable 00:19:56.624 04:57:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.624 04:57:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:56.624 04:57:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:56.882 [2024-05-15 04:57:10.857126] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:56.882 [2024-05-15 04:57:10.857284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:19:56.882 [2024-05-15 04:57:11.017221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.141 [2024-05-15 04:57:11.232768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.084  Copying: 512/512 [B] (average 500 kBps) 00:19:59.084 00:19:59.085 04:57:13 -- dd/posix.sh@93 -- # [[ lw84eappcx0oxymg4552k050o7s3ei34zffhg59kp25xz3dwagf2cfa2ifj8octt5nikmxieentbyg1570v78al2ufls2hh7m8wl41oqr8zg98i5j7h9spvi5ndui34newnm3xbuna0hihq9l978gnpz2uarjksl9e1brc8o2k14cqmw4mmxhuduy58ychcx8b01vfsnrkduy68l4bn5yyl23et9oqzkdsuw3d8r1bihep9m96n3as33sza6h82urpdsg3jh2szkdl8ffq6mbfm52z2s3uwheult29kiuqfc4xym1k6erqkwgwbghrs4yp3dmy93g2up2qrh9f7rd3l4oy42l2ho196lhjsrmqt8oolerkxw732oeeo7x5j36lhv0iwqzb8lddpwmi1w88csedhci8wfpvftfqa9e96vghb97guufmqougq3hocbicla8n3hjvafzcvseniyoh9uovnt7gijarmgeawn8e2uqga2krcghak5a81gm9zy == \l\w\8\4\e\a\p\p\c\x\0\o\x\y\m\g\4\5\5\2\k\0\5\0\o\7\s\3\e\i\3\4\z\f\f\h\g\5\9\k\p\2\5\x\z\3\d\w\a\g\f\2\c\f\a\2\i\f\j\8\o\c\t\t\5\n\i\k\m\x\i\e\e\n\t\b\y\g\1\5\7\0\v\7\8\a\l\2\u\f\l\s\2\h\h\7\m\8\w\l\4\1\o\q\r\8\z\g\9\8\i\5\j\7\h\9\s\p\v\i\5\n\d\u\i\3\4\n\e\w\n\m\3\x\b\u\n\a\0\h\i\h\q\9\l\9\7\8\g\n\p\z\2\u\a\r\j\k\s\l\9\e\1\b\r\c\8\o\2\k\1\4\c\q\m\w\4\m\m\x\h\u\d\u\y\5\8\y\c\h\c\x\8\b\0\1\v\f\s\n\r\k\d\u\y\6\8\l\4\b\n\5\y\y\l\2\3\e\t\9\o\q\z\k\d\s\u\w\3\d\8\r\1\b\i\h\e\p\9\m\9\6\n\3\a\s\3\3\s\z\a\6\h\8\2\u\r\p\d\s\g\3\j\h\2\s\z\k\d\l\8\f\f\q\6\m\b\f\m\5\2\z\2\s\3\u\w\h\e\u\l\t\2\9\k\i\u\q\f\c\4\x\y\m\1\k\6\e\r\q\k\w\g\w\b\g\h\r\s\4\y\p\3\d\m\y\9\3\g\2\u\p\2\q\r\h\9\f\7\r\d\3\l\4\o\y\4\2\l\2\h\o\1\9\6\l\h\j\s\r\m\q\t\8\o\o\l\e\r\k\x\w\7\3\2\o\e\e\o\7\x\5\j\3\6\l\h\v\0\i\w\q\z\b\8\l\d\d\p\w\m\i\1\w\8\8\c\s\e\d\h\c\i\8\w\f\p\v\f\t\f\q\a\9\e\9\6\v\g\h\b\9\7\g\u\u\f\m\q\o\u\g\q\3\h\o\c\b\i\c\l\a\8\n\3\h\j\v\a\f\z\c\v\s\e\n\i\y\o\h\9\u\o\v\n\t\7\g\i\j\a\r\m\g\e\a\w\n\8\e\2\u\q\g\a\2\k\r\c\g\h\a\k\5\a\8\1\g\m\9\z\y ]] 00:19:59.085 04:57:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:59.085 04:57:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:59.085 [2024-05-15 04:57:13.162167] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
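The dd_flags_misc test starting above sweeps every read flag against every write flag and verifies the copy after each run. A sketch of the loop being traced, where $SPDK_DD, $dump0 and $dump1 stand in for the binary and dump-file paths spelled out in the log, and the gen_bytes redirection is assumed plumbing:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)       # write flags extend the read flags

    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > "$dump0"                 # fresh 512-byte random payload per read flag
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if="$dump0" --iflag="$flag_ro" \
                       --of="$dump1" --oflag="$flag_rw"
            [[ $(< "$dump0") == "$(< "$dump1")" ]]   # byte-for-byte check, seen below as the long [[ ... ]] lines
        done
    done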
00:19:59.085 [2024-05-15 04:57:13.162321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:19:59.085 [2024-05-15 04:57:13.312484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.343 [2024-05-15 04:57:13.534219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.287  Copying: 512/512 [B] (average 500 kBps) 00:20:01.287 00:20:01.287 04:57:15 -- dd/posix.sh@93 -- # [[ lw84eappcx0oxymg4552k050o7s3ei34zffhg59kp25xz3dwagf2cfa2ifj8octt5nikmxieentbyg1570v78al2ufls2hh7m8wl41oqr8zg98i5j7h9spvi5ndui34newnm3xbuna0hihq9l978gnpz2uarjksl9e1brc8o2k14cqmw4mmxhuduy58ychcx8b01vfsnrkduy68l4bn5yyl23et9oqzkdsuw3d8r1bihep9m96n3as33sza6h82urpdsg3jh2szkdl8ffq6mbfm52z2s3uwheult29kiuqfc4xym1k6erqkwgwbghrs4yp3dmy93g2up2qrh9f7rd3l4oy42l2ho196lhjsrmqt8oolerkxw732oeeo7x5j36lhv0iwqzb8lddpwmi1w88csedhci8wfpvftfqa9e96vghb97guufmqougq3hocbicla8n3hjvafzcvseniyoh9uovnt7gijarmgeawn8e2uqga2krcghak5a81gm9zy == \l\w\8\4\e\a\p\p\c\x\0\o\x\y\m\g\4\5\5\2\k\0\5\0\o\7\s\3\e\i\3\4\z\f\f\h\g\5\9\k\p\2\5\x\z\3\d\w\a\g\f\2\c\f\a\2\i\f\j\8\o\c\t\t\5\n\i\k\m\x\i\e\e\n\t\b\y\g\1\5\7\0\v\7\8\a\l\2\u\f\l\s\2\h\h\7\m\8\w\l\4\1\o\q\r\8\z\g\9\8\i\5\j\7\h\9\s\p\v\i\5\n\d\u\i\3\4\n\e\w\n\m\3\x\b\u\n\a\0\h\i\h\q\9\l\9\7\8\g\n\p\z\2\u\a\r\j\k\s\l\9\e\1\b\r\c\8\o\2\k\1\4\c\q\m\w\4\m\m\x\h\u\d\u\y\5\8\y\c\h\c\x\8\b\0\1\v\f\s\n\r\k\d\u\y\6\8\l\4\b\n\5\y\y\l\2\3\e\t\9\o\q\z\k\d\s\u\w\3\d\8\r\1\b\i\h\e\p\9\m\9\6\n\3\a\s\3\3\s\z\a\6\h\8\2\u\r\p\d\s\g\3\j\h\2\s\z\k\d\l\8\f\f\q\6\m\b\f\m\5\2\z\2\s\3\u\w\h\e\u\l\t\2\9\k\i\u\q\f\c\4\x\y\m\1\k\6\e\r\q\k\w\g\w\b\g\h\r\s\4\y\p\3\d\m\y\9\3\g\2\u\p\2\q\r\h\9\f\7\r\d\3\l\4\o\y\4\2\l\2\h\o\1\9\6\l\h\j\s\r\m\q\t\8\o\o\l\e\r\k\x\w\7\3\2\o\e\e\o\7\x\5\j\3\6\l\h\v\0\i\w\q\z\b\8\l\d\d\p\w\m\i\1\w\8\8\c\s\e\d\h\c\i\8\w\f\p\v\f\t\f\q\a\9\e\9\6\v\g\h\b\9\7\g\u\u\f\m\q\o\u\g\q\3\h\o\c\b\i\c\l\a\8\n\3\h\j\v\a\f\z\c\v\s\e\n\i\y\o\h\9\u\o\v\n\t\7\g\i\j\a\r\m\g\e\a\w\n\8\e\2\u\q\g\a\2\k\r\c\g\h\a\k\5\a\8\1\g\m\9\z\y ]] 00:20:01.287 04:57:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:01.287 04:57:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:20:01.287 [2024-05-15 04:57:15.432510] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
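The right-hand operand of each long [[ ... == ... ]] line is printed as \l\w\8\4... because bash xtrace escapes the pattern side of == inside [[ ]]; both operands are the same 512-byte random string, so the match is literal and succeeds. The same effect on a short string:

    $ set -x
    $ data=abc
    $ [[ $data == "$data" ]]
    + [[ abc == \a\b\c ]]      # the quoted pattern operand is echoed character-escaped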
00:20:01.287 [2024-05-15 04:57:15.432666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ] 00:20:01.545 [2024-05-15 04:57:15.583296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.804 [2024-05-15 04:57:15.801874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.439  Copying: 512/512 [B] (average 166 kBps) 00:20:03.439 00:20:03.439 04:57:17 -- dd/posix.sh@93 -- # [[ lw84eappcx0oxymg4552k050o7s3ei34zffhg59kp25xz3dwagf2cfa2ifj8octt5nikmxieentbyg1570v78al2ufls2hh7m8wl41oqr8zg98i5j7h9spvi5ndui34newnm3xbuna0hihq9l978gnpz2uarjksl9e1brc8o2k14cqmw4mmxhuduy58ychcx8b01vfsnrkduy68l4bn5yyl23et9oqzkdsuw3d8r1bihep9m96n3as33sza6h82urpdsg3jh2szkdl8ffq6mbfm52z2s3uwheult29kiuqfc4xym1k6erqkwgwbghrs4yp3dmy93g2up2qrh9f7rd3l4oy42l2ho196lhjsrmqt8oolerkxw732oeeo7x5j36lhv0iwqzb8lddpwmi1w88csedhci8wfpvftfqa9e96vghb97guufmqougq3hocbicla8n3hjvafzcvseniyoh9uovnt7gijarmgeawn8e2uqga2krcghak5a81gm9zy == \l\w\8\4\e\a\p\p\c\x\0\o\x\y\m\g\4\5\5\2\k\0\5\0\o\7\s\3\e\i\3\4\z\f\f\h\g\5\9\k\p\2\5\x\z\3\d\w\a\g\f\2\c\f\a\2\i\f\j\8\o\c\t\t\5\n\i\k\m\x\i\e\e\n\t\b\y\g\1\5\7\0\v\7\8\a\l\2\u\f\l\s\2\h\h\7\m\8\w\l\4\1\o\q\r\8\z\g\9\8\i\5\j\7\h\9\s\p\v\i\5\n\d\u\i\3\4\n\e\w\n\m\3\x\b\u\n\a\0\h\i\h\q\9\l\9\7\8\g\n\p\z\2\u\a\r\j\k\s\l\9\e\1\b\r\c\8\o\2\k\1\4\c\q\m\w\4\m\m\x\h\u\d\u\y\5\8\y\c\h\c\x\8\b\0\1\v\f\s\n\r\k\d\u\y\6\8\l\4\b\n\5\y\y\l\2\3\e\t\9\o\q\z\k\d\s\u\w\3\d\8\r\1\b\i\h\e\p\9\m\9\6\n\3\a\s\3\3\s\z\a\6\h\8\2\u\r\p\d\s\g\3\j\h\2\s\z\k\d\l\8\f\f\q\6\m\b\f\m\5\2\z\2\s\3\u\w\h\e\u\l\t\2\9\k\i\u\q\f\c\4\x\y\m\1\k\6\e\r\q\k\w\g\w\b\g\h\r\s\4\y\p\3\d\m\y\9\3\g\2\u\p\2\q\r\h\9\f\7\r\d\3\l\4\o\y\4\2\l\2\h\o\1\9\6\l\h\j\s\r\m\q\t\8\o\o\l\e\r\k\x\w\7\3\2\o\e\e\o\7\x\5\j\3\6\l\h\v\0\i\w\q\z\b\8\l\d\d\p\w\m\i\1\w\8\8\c\s\e\d\h\c\i\8\w\f\p\v\f\t\f\q\a\9\e\9\6\v\g\h\b\9\7\g\u\u\f\m\q\o\u\g\q\3\h\o\c\b\i\c\l\a\8\n\3\h\j\v\a\f\z\c\v\s\e\n\i\y\o\h\9\u\o\v\n\t\7\g\i\j\a\r\m\g\e\a\w\n\8\e\2\u\q\g\a\2\k\r\c\g\h\a\k\5\a\8\1\g\m\9\z\y ]] 00:20:03.439 04:57:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:03.439 04:57:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:20:03.697 [2024-05-15 04:57:17.737702] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:03.697 [2024-05-15 04:57:17.737869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:20:03.697 [2024-05-15 04:57:17.889546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.955 [2024-05-15 04:57:18.120782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.894  Copying: 512/512 [B] (average 166 kBps) 00:20:05.894 00:20:05.894 04:57:19 -- dd/posix.sh@93 -- # [[ lw84eappcx0oxymg4552k050o7s3ei34zffhg59kp25xz3dwagf2cfa2ifj8octt5nikmxieentbyg1570v78al2ufls2hh7m8wl41oqr8zg98i5j7h9spvi5ndui34newnm3xbuna0hihq9l978gnpz2uarjksl9e1brc8o2k14cqmw4mmxhuduy58ychcx8b01vfsnrkduy68l4bn5yyl23et9oqzkdsuw3d8r1bihep9m96n3as33sza6h82urpdsg3jh2szkdl8ffq6mbfm52z2s3uwheult29kiuqfc4xym1k6erqkwgwbghrs4yp3dmy93g2up2qrh9f7rd3l4oy42l2ho196lhjsrmqt8oolerkxw732oeeo7x5j36lhv0iwqzb8lddpwmi1w88csedhci8wfpvftfqa9e96vghb97guufmqougq3hocbicla8n3hjvafzcvseniyoh9uovnt7gijarmgeawn8e2uqga2krcghak5a81gm9zy == \l\w\8\4\e\a\p\p\c\x\0\o\x\y\m\g\4\5\5\2\k\0\5\0\o\7\s\3\e\i\3\4\z\f\f\h\g\5\9\k\p\2\5\x\z\3\d\w\a\g\f\2\c\f\a\2\i\f\j\8\o\c\t\t\5\n\i\k\m\x\i\e\e\n\t\b\y\g\1\5\7\0\v\7\8\a\l\2\u\f\l\s\2\h\h\7\m\8\w\l\4\1\o\q\r\8\z\g\9\8\i\5\j\7\h\9\s\p\v\i\5\n\d\u\i\3\4\n\e\w\n\m\3\x\b\u\n\a\0\h\i\h\q\9\l\9\7\8\g\n\p\z\2\u\a\r\j\k\s\l\9\e\1\b\r\c\8\o\2\k\1\4\c\q\m\w\4\m\m\x\h\u\d\u\y\5\8\y\c\h\c\x\8\b\0\1\v\f\s\n\r\k\d\u\y\6\8\l\4\b\n\5\y\y\l\2\3\e\t\9\o\q\z\k\d\s\u\w\3\d\8\r\1\b\i\h\e\p\9\m\9\6\n\3\a\s\3\3\s\z\a\6\h\8\2\u\r\p\d\s\g\3\j\h\2\s\z\k\d\l\8\f\f\q\6\m\b\f\m\5\2\z\2\s\3\u\w\h\e\u\l\t\2\9\k\i\u\q\f\c\4\x\y\m\1\k\6\e\r\q\k\w\g\w\b\g\h\r\s\4\y\p\3\d\m\y\9\3\g\2\u\p\2\q\r\h\9\f\7\r\d\3\l\4\o\y\4\2\l\2\h\o\1\9\6\l\h\j\s\r\m\q\t\8\o\o\l\e\r\k\x\w\7\3\2\o\e\e\o\7\x\5\j\3\6\l\h\v\0\i\w\q\z\b\8\l\d\d\p\w\m\i\1\w\8\8\c\s\e\d\h\c\i\8\w\f\p\v\f\t\f\q\a\9\e\9\6\v\g\h\b\9\7\g\u\u\f\m\q\o\u\g\q\3\h\o\c\b\i\c\l\a\8\n\3\h\j\v\a\f\z\c\v\s\e\n\i\y\o\h\9\u\o\v\n\t\7\g\i\j\a\r\m\g\e\a\w\n\8\e\2\u\q\g\a\2\k\r\c\g\h\a\k\5\a\8\1\g\m\9\z\y ]] 00:20:05.894 04:57:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:20:05.894 04:57:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:20:05.894 04:57:19 -- dd/common.sh@98 -- # xtrace_disable 00:20:05.894 04:57:19 -- common/autotest_common.sh@10 -- # set +x 00:20:05.894 04:57:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:05.894 04:57:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:20:05.894 [2024-05-15 04:57:20.041257] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:05.894 [2024-05-15 04:57:20.041412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:20:06.152 [2024-05-15 04:57:20.197434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.412 [2024-05-15 04:57:20.422445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.048  Copying: 512/512 [B] (average 500 kBps) 00:20:08.048 00:20:08.048 04:57:22 -- dd/posix.sh@93 -- # [[ wzj8cjeaftd86qplkd1ob9xkfigh5zfvx14xzrbduyuakcar0cpzdvf75xeaui3k0n5g39m9agahqbemaxnpo3x46pp18m22typqzcyslbsyyr9w9wo6k6p3jdewsaqnsfzt4sq4zkaotdho36uz94dcljwl7xop7l35wzg2vy3ff0junou779jtatf34pjidwemu79gpf56xkm8khed8q96qde0svaodnutcgm7293wlxoqgqbtetpy53p1xkeepo00ly4q85kjyqxjyddiuclumwlr1j4172wd8povmfrr1o51aq2465udfxeeotze206yghsop2z25kfnxoqv3ndfaobmn6nhx93b9t8imeizza39g97k8wanle9wezpw4vh8jncc8jslba6ykx8g4a8wmsybj78d5lcz3r7ydp2p644ibrqak74mg0lkkwzxsci58hltner8gfoux64jcoztg54o0hskd2zc8pqkfmgwkf4wypk8p63ocpnfpaab == \w\z\j\8\c\j\e\a\f\t\d\8\6\q\p\l\k\d\1\o\b\9\x\k\f\i\g\h\5\z\f\v\x\1\4\x\z\r\b\d\u\y\u\a\k\c\a\r\0\c\p\z\d\v\f\7\5\x\e\a\u\i\3\k\0\n\5\g\3\9\m\9\a\g\a\h\q\b\e\m\a\x\n\p\o\3\x\4\6\p\p\1\8\m\2\2\t\y\p\q\z\c\y\s\l\b\s\y\y\r\9\w\9\w\o\6\k\6\p\3\j\d\e\w\s\a\q\n\s\f\z\t\4\s\q\4\z\k\a\o\t\d\h\o\3\6\u\z\9\4\d\c\l\j\w\l\7\x\o\p\7\l\3\5\w\z\g\2\v\y\3\f\f\0\j\u\n\o\u\7\7\9\j\t\a\t\f\3\4\p\j\i\d\w\e\m\u\7\9\g\p\f\5\6\x\k\m\8\k\h\e\d\8\q\9\6\q\d\e\0\s\v\a\o\d\n\u\t\c\g\m\7\2\9\3\w\l\x\o\q\g\q\b\t\e\t\p\y\5\3\p\1\x\k\e\e\p\o\0\0\l\y\4\q\8\5\k\j\y\q\x\j\y\d\d\i\u\c\l\u\m\w\l\r\1\j\4\1\7\2\w\d\8\p\o\v\m\f\r\r\1\o\5\1\a\q\2\4\6\5\u\d\f\x\e\e\o\t\z\e\2\0\6\y\g\h\s\o\p\2\z\2\5\k\f\n\x\o\q\v\3\n\d\f\a\o\b\m\n\6\n\h\x\9\3\b\9\t\8\i\m\e\i\z\z\a\3\9\g\9\7\k\8\w\a\n\l\e\9\w\e\z\p\w\4\v\h\8\j\n\c\c\8\j\s\l\b\a\6\y\k\x\8\g\4\a\8\w\m\s\y\b\j\7\8\d\5\l\c\z\3\r\7\y\d\p\2\p\6\4\4\i\b\r\q\a\k\7\4\m\g\0\l\k\k\w\z\x\s\c\i\5\8\h\l\t\n\e\r\8\g\f\o\u\x\6\4\j\c\o\z\t\g\5\4\o\0\h\s\k\d\2\z\c\8\p\q\k\f\m\g\w\k\f\4\w\y\p\k\8\p\6\3\o\c\p\n\f\p\a\a\b ]] 00:20:08.048 04:57:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:08.048 04:57:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:20:08.306 [2024-05-15 04:57:22.348836] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:08.306 [2024-05-15 04:57:22.348993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59300 ] 00:20:08.306 [2024-05-15 04:57:22.501444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.564 [2024-05-15 04:57:22.739847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.503  Copying: 512/512 [B] (average 500 kBps) 00:20:10.503 00:20:10.503 04:57:24 -- dd/posix.sh@93 -- # [[ wzj8cjeaftd86qplkd1ob9xkfigh5zfvx14xzrbduyuakcar0cpzdvf75xeaui3k0n5g39m9agahqbemaxnpo3x46pp18m22typqzcyslbsyyr9w9wo6k6p3jdewsaqnsfzt4sq4zkaotdho36uz94dcljwl7xop7l35wzg2vy3ff0junou779jtatf34pjidwemu79gpf56xkm8khed8q96qde0svaodnutcgm7293wlxoqgqbtetpy53p1xkeepo00ly4q85kjyqxjyddiuclumwlr1j4172wd8povmfrr1o51aq2465udfxeeotze206yghsop2z25kfnxoqv3ndfaobmn6nhx93b9t8imeizza39g97k8wanle9wezpw4vh8jncc8jslba6ykx8g4a8wmsybj78d5lcz3r7ydp2p644ibrqak74mg0lkkwzxsci58hltner8gfoux64jcoztg54o0hskd2zc8pqkfmgwkf4wypk8p63ocpnfpaab == \w\z\j\8\c\j\e\a\f\t\d\8\6\q\p\l\k\d\1\o\b\9\x\k\f\i\g\h\5\z\f\v\x\1\4\x\z\r\b\d\u\y\u\a\k\c\a\r\0\c\p\z\d\v\f\7\5\x\e\a\u\i\3\k\0\n\5\g\3\9\m\9\a\g\a\h\q\b\e\m\a\x\n\p\o\3\x\4\6\p\p\1\8\m\2\2\t\y\p\q\z\c\y\s\l\b\s\y\y\r\9\w\9\w\o\6\k\6\p\3\j\d\e\w\s\a\q\n\s\f\z\t\4\s\q\4\z\k\a\o\t\d\h\o\3\6\u\z\9\4\d\c\l\j\w\l\7\x\o\p\7\l\3\5\w\z\g\2\v\y\3\f\f\0\j\u\n\o\u\7\7\9\j\t\a\t\f\3\4\p\j\i\d\w\e\m\u\7\9\g\p\f\5\6\x\k\m\8\k\h\e\d\8\q\9\6\q\d\e\0\s\v\a\o\d\n\u\t\c\g\m\7\2\9\3\w\l\x\o\q\g\q\b\t\e\t\p\y\5\3\p\1\x\k\e\e\p\o\0\0\l\y\4\q\8\5\k\j\y\q\x\j\y\d\d\i\u\c\l\u\m\w\l\r\1\j\4\1\7\2\w\d\8\p\o\v\m\f\r\r\1\o\5\1\a\q\2\4\6\5\u\d\f\x\e\e\o\t\z\e\2\0\6\y\g\h\s\o\p\2\z\2\5\k\f\n\x\o\q\v\3\n\d\f\a\o\b\m\n\6\n\h\x\9\3\b\9\t\8\i\m\e\i\z\z\a\3\9\g\9\7\k\8\w\a\n\l\e\9\w\e\z\p\w\4\v\h\8\j\n\c\c\8\j\s\l\b\a\6\y\k\x\8\g\4\a\8\w\m\s\y\b\j\7\8\d\5\l\c\z\3\r\7\y\d\p\2\p\6\4\4\i\b\r\q\a\k\7\4\m\g\0\l\k\k\w\z\x\s\c\i\5\8\h\l\t\n\e\r\8\g\f\o\u\x\6\4\j\c\o\z\t\g\5\4\o\0\h\s\k\d\2\z\c\8\p\q\k\f\m\g\w\k\f\4\w\y\p\k\8\p\6\3\o\c\p\n\f\p\a\a\b ]] 00:20:10.503 04:57:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:10.503 04:57:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:20:10.503 [2024-05-15 04:57:24.660652] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:10.503 [2024-05-15 04:57:24.661013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59334 ] 00:20:10.766 [2024-05-15 04:57:24.812380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.072 [2024-05-15 04:57:25.055114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.712  Copying: 512/512 [B] (average 166 kBps) 00:20:12.712 00:20:12.712 04:57:26 -- dd/posix.sh@93 -- # [[ wzj8cjeaftd86qplkd1ob9xkfigh5zfvx14xzrbduyuakcar0cpzdvf75xeaui3k0n5g39m9agahqbemaxnpo3x46pp18m22typqzcyslbsyyr9w9wo6k6p3jdewsaqnsfzt4sq4zkaotdho36uz94dcljwl7xop7l35wzg2vy3ff0junou779jtatf34pjidwemu79gpf56xkm8khed8q96qde0svaodnutcgm7293wlxoqgqbtetpy53p1xkeepo00ly4q85kjyqxjyddiuclumwlr1j4172wd8povmfrr1o51aq2465udfxeeotze206yghsop2z25kfnxoqv3ndfaobmn6nhx93b9t8imeizza39g97k8wanle9wezpw4vh8jncc8jslba6ykx8g4a8wmsybj78d5lcz3r7ydp2p644ibrqak74mg0lkkwzxsci58hltner8gfoux64jcoztg54o0hskd2zc8pqkfmgwkf4wypk8p63ocpnfpaab == \w\z\j\8\c\j\e\a\f\t\d\8\6\q\p\l\k\d\1\o\b\9\x\k\f\i\g\h\5\z\f\v\x\1\4\x\z\r\b\d\u\y\u\a\k\c\a\r\0\c\p\z\d\v\f\7\5\x\e\a\u\i\3\k\0\n\5\g\3\9\m\9\a\g\a\h\q\b\e\m\a\x\n\p\o\3\x\4\6\p\p\1\8\m\2\2\t\y\p\q\z\c\y\s\l\b\s\y\y\r\9\w\9\w\o\6\k\6\p\3\j\d\e\w\s\a\q\n\s\f\z\t\4\s\q\4\z\k\a\o\t\d\h\o\3\6\u\z\9\4\d\c\l\j\w\l\7\x\o\p\7\l\3\5\w\z\g\2\v\y\3\f\f\0\j\u\n\o\u\7\7\9\j\t\a\t\f\3\4\p\j\i\d\w\e\m\u\7\9\g\p\f\5\6\x\k\m\8\k\h\e\d\8\q\9\6\q\d\e\0\s\v\a\o\d\n\u\t\c\g\m\7\2\9\3\w\l\x\o\q\g\q\b\t\e\t\p\y\5\3\p\1\x\k\e\e\p\o\0\0\l\y\4\q\8\5\k\j\y\q\x\j\y\d\d\i\u\c\l\u\m\w\l\r\1\j\4\1\7\2\w\d\8\p\o\v\m\f\r\r\1\o\5\1\a\q\2\4\6\5\u\d\f\x\e\e\o\t\z\e\2\0\6\y\g\h\s\o\p\2\z\2\5\k\f\n\x\o\q\v\3\n\d\f\a\o\b\m\n\6\n\h\x\9\3\b\9\t\8\i\m\e\i\z\z\a\3\9\g\9\7\k\8\w\a\n\l\e\9\w\e\z\p\w\4\v\h\8\j\n\c\c\8\j\s\l\b\a\6\y\k\x\8\g\4\a\8\w\m\s\y\b\j\7\8\d\5\l\c\z\3\r\7\y\d\p\2\p\6\4\4\i\b\r\q\a\k\7\4\m\g\0\l\k\k\w\z\x\s\c\i\5\8\h\l\t\n\e\r\8\g\f\o\u\x\6\4\j\c\o\z\t\g\5\4\o\0\h\s\k\d\2\z\c\8\p\q\k\f\m\g\w\k\f\4\w\y\p\k\8\p\6\3\o\c\p\n\f\p\a\a\b ]] 00:20:12.712 04:57:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:12.712 04:57:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:20:12.968 [2024-05-15 04:57:26.972257] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:12.968 [2024-05-15 04:57:26.972412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:20:12.968 [2024-05-15 04:57:27.126191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.226 [2024-05-15 04:57:27.358366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.166  Copying: 512/512 [B] (average 250 kBps) 00:20:15.166 00:20:15.166 ************************************ 00:20:15.166 END TEST dd_flags_misc 00:20:15.166 ************************************ 00:20:15.166 04:57:29 -- dd/posix.sh@93 -- # [[ wzj8cjeaftd86qplkd1ob9xkfigh5zfvx14xzrbduyuakcar0cpzdvf75xeaui3k0n5g39m9agahqbemaxnpo3x46pp18m22typqzcyslbsyyr9w9wo6k6p3jdewsaqnsfzt4sq4zkaotdho36uz94dcljwl7xop7l35wzg2vy3ff0junou779jtatf34pjidwemu79gpf56xkm8khed8q96qde0svaodnutcgm7293wlxoqgqbtetpy53p1xkeepo00ly4q85kjyqxjyddiuclumwlr1j4172wd8povmfrr1o51aq2465udfxeeotze206yghsop2z25kfnxoqv3ndfaobmn6nhx93b9t8imeizza39g97k8wanle9wezpw4vh8jncc8jslba6ykx8g4a8wmsybj78d5lcz3r7ydp2p644ibrqak74mg0lkkwzxsci58hltner8gfoux64jcoztg54o0hskd2zc8pqkfmgwkf4wypk8p63ocpnfpaab == \w\z\j\8\c\j\e\a\f\t\d\8\6\q\p\l\k\d\1\o\b\9\x\k\f\i\g\h\5\z\f\v\x\1\4\x\z\r\b\d\u\y\u\a\k\c\a\r\0\c\p\z\d\v\f\7\5\x\e\a\u\i\3\k\0\n\5\g\3\9\m\9\a\g\a\h\q\b\e\m\a\x\n\p\o\3\x\4\6\p\p\1\8\m\2\2\t\y\p\q\z\c\y\s\l\b\s\y\y\r\9\w\9\w\o\6\k\6\p\3\j\d\e\w\s\a\q\n\s\f\z\t\4\s\q\4\z\k\a\o\t\d\h\o\3\6\u\z\9\4\d\c\l\j\w\l\7\x\o\p\7\l\3\5\w\z\g\2\v\y\3\f\f\0\j\u\n\o\u\7\7\9\j\t\a\t\f\3\4\p\j\i\d\w\e\m\u\7\9\g\p\f\5\6\x\k\m\8\k\h\e\d\8\q\9\6\q\d\e\0\s\v\a\o\d\n\u\t\c\g\m\7\2\9\3\w\l\x\o\q\g\q\b\t\e\t\p\y\5\3\p\1\x\k\e\e\p\o\0\0\l\y\4\q\8\5\k\j\y\q\x\j\y\d\d\i\u\c\l\u\m\w\l\r\1\j\4\1\7\2\w\d\8\p\o\v\m\f\r\r\1\o\5\1\a\q\2\4\6\5\u\d\f\x\e\e\o\t\z\e\2\0\6\y\g\h\s\o\p\2\z\2\5\k\f\n\x\o\q\v\3\n\d\f\a\o\b\m\n\6\n\h\x\9\3\b\9\t\8\i\m\e\i\z\z\a\3\9\g\9\7\k\8\w\a\n\l\e\9\w\e\z\p\w\4\v\h\8\j\n\c\c\8\j\s\l\b\a\6\y\k\x\8\g\4\a\8\w\m\s\y\b\j\7\8\d\5\l\c\z\3\r\7\y\d\p\2\p\6\4\4\i\b\r\q\a\k\7\4\m\g\0\l\k\k\w\z\x\s\c\i\5\8\h\l\t\n\e\r\8\g\f\o\u\x\6\4\j\c\o\z\t\g\5\4\o\0\h\s\k\d\2\z\c\8\p\q\k\f\m\g\w\k\f\4\w\y\p\k\8\p\6\3\o\c\p\n\f\p\a\a\b ]] 00:20:15.166 00:20:15.166 real 0m18.416s 00:20:15.166 user 0m14.219s 00:20:15.166 sys 0m2.554s 00:20:15.166 04:57:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.166 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 04:57:29 -- dd/posix.sh@131 -- # tests_forced_aio 00:20:15.166 04:57:29 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:20:15.166 * Second test run, using AIO 00:20:15.166 04:57:29 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:20:15.166 04:57:29 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:20:15.166 04:57:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:15.166 04:57:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.166 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 ************************************ 00:20:15.166 START TEST dd_flag_append_forced_aio 00:20:15.166 ************************************ 00:20:15.166 04:57:29 -- common/autotest_common.sh@1104 -- # append 00:20:15.166 04:57:29 -- dd/posix.sh@16 -- # local dump0 00:20:15.166 04:57:29 -- dd/posix.sh@17 -- # local dump1 00:20:15.166 04:57:29 -- dd/posix.sh@19 -- # gen_bytes 32 00:20:15.166 04:57:29 -- dd/common.sh@98 -- # xtrace_disable 
00:20:15.166 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 04:57:29 -- dd/posix.sh@19 -- # dump0=1764twz0a8wldcam1nlkl0rl3cbnlekl 00:20:15.166 04:57:29 -- dd/posix.sh@20 -- # gen_bytes 32 00:20:15.166 04:57:29 -- dd/common.sh@98 -- # xtrace_disable 00:20:15.166 04:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 04:57:29 -- dd/posix.sh@20 -- # dump1=1ox4ptnq0zkbrqv87gyiwxwq9575z9yz 00:20:15.166 04:57:29 -- dd/posix.sh@22 -- # printf %s 1764twz0a8wldcam1nlkl0rl3cbnlekl 00:20:15.166 04:57:29 -- dd/posix.sh@23 -- # printf %s 1ox4ptnq0zkbrqv87gyiwxwq9575z9yz 00:20:15.166 04:57:29 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:20:15.166 [2024-05-15 04:57:29.332297] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:15.166 [2024-05-15 04:57:29.332441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59420 ] 00:20:15.424 [2024-05-15 04:57:29.492197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.682 [2024-05-15 04:57:29.721963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.313  Copying: 32/32 [B] (average 31 kBps) 00:20:17.313 00:20:17.313 ************************************ 00:20:17.313 END TEST dd_flag_append_forced_aio 00:20:17.313 ************************************ 00:20:17.313 04:57:31 -- dd/posix.sh@27 -- # [[ 1ox4ptnq0zkbrqv87gyiwxwq9575z9yz1764twz0a8wldcam1nlkl0rl3cbnlekl == \1\o\x\4\p\t\n\q\0\z\k\b\r\q\v\8\7\g\y\i\w\x\w\q\9\5\7\5\z\9\y\z\1\7\6\4\t\w\z\0\a\8\w\l\d\c\a\m\1\n\l\k\l\0\r\l\3\c\b\n\l\e\k\l ]] 00:20:17.313 00:20:17.313 real 0m2.305s 00:20:17.313 user 0m1.767s 00:20:17.313 sys 0m0.331s 00:20:17.313 04:57:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.313 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:20:17.313 04:57:31 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:20:17.313 04:57:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:17.313 04:57:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:17.313 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:20:17.313 ************************************ 00:20:17.313 START TEST dd_flag_directory_forced_aio 00:20:17.313 ************************************ 00:20:17.313 04:57:31 -- common/autotest_common.sh@1104 -- # directory 00:20:17.313 04:57:31 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:17.313 04:57:31 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.313 04:57:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:17.313 04:57:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:17.313 04:57:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.313 04:57:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:17.313 04:57:31 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.313 04:57:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:17.313 04:57:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.313 04:57:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:17.313 04:57:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:17.313 04:57:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:17.571 [2024-05-15 04:57:31.681838] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:17.571 [2024-05-15 04:57:31.681998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ] 00:20:17.829 [2024-05-15 04:57:31.832061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.829 [2024-05-15 04:57:32.054155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.395 [2024-05-15 04:57:32.485165] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:20:18.395 [2024-05-15 04:57:32.485240] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:20:18.395 [2024-05-15 04:57:32.485283] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:19.330 [2024-05-15 04:57:33.368846] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:19.587 04:57:33 -- common/autotest_common.sh@643 -- # es=236 00:20:19.587 04:57:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.587 04:57:33 -- common/autotest_common.sh@652 -- # es=108 00:20:19.587 04:57:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:19.587 04:57:33 -- common/autotest_common.sh@660 -- # es=1 00:20:19.587 04:57:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.587 04:57:33 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:20:19.587 04:57:33 -- common/autotest_common.sh@640 -- # local es=0 00:20:19.588 04:57:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:20:19.588 04:57:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.588 04:57:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.588 04:57:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.588 04:57:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.588 04:57:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.588 04:57:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.588 04:57:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:20:19.588 04:57:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:19.588 04:57:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:20:19.845 [2024-05-15 04:57:33.944574] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:19.845 [2024-05-15 04:57:33.944889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59508 ] 00:20:20.102 [2024-05-15 04:57:34.098546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.102 [2024-05-15 04:57:34.322933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.667 [2024-05-15 04:57:34.758925] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:20:20.667 [2024-05-15 04:57:34.758998] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:20:20.667 [2024-05-15 04:57:34.759041] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:21.600 [2024-05-15 04:57:35.658550] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:21.859 ************************************ 00:20:21.859 END TEST dd_flag_directory_forced_aio 00:20:21.859 ************************************ 00:20:21.859 04:57:36 -- common/autotest_common.sh@643 -- # es=236 00:20:21.859 04:57:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:21.859 04:57:36 -- common/autotest_common.sh@652 -- # es=108 00:20:21.859 04:57:36 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:21.859 04:57:36 -- common/autotest_common.sh@660 -- # es=1 00:20:21.859 04:57:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:21.859 00:20:21.859 real 0m4.549s 00:20:21.859 user 0m3.530s 00:20:21.859 sys 0m0.625s 00:20:21.859 04:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.859 04:57:36 -- common/autotest_common.sh@10 -- # set +x 00:20:22.117 04:57:36 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:20:22.117 04:57:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:22.117 04:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:22.117 04:57:36 -- common/autotest_common.sh@10 -- # set +x 00:20:22.117 ************************************ 00:20:22.117 START TEST dd_flag_nofollow_forced_aio 00:20:22.117 ************************************ 00:20:22.117 04:57:36 -- common/autotest_common.sh@1104 -- # nofollow 00:20:22.117 04:57:36 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:20:22.117 04:57:36 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:20:22.117 04:57:36 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:20:22.117 04:57:36 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:20:22.117 04:57:36 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:22.117 04:57:36 -- common/autotest_common.sh@640 -- # local es=0 00:20:22.117 04:57:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:22.117 04:57:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:22.117 04:57:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.117 04:57:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:22.117 04:57:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.117 04:57:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:22.117 04:57:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.117 04:57:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:22.117 04:57:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:22.117 04:57:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:22.117 [2024-05-15 04:57:36.297537] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:22.117 [2024-05-15 04:57:36.297690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:20:22.376 [2024-05-15 04:57:36.447532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.634 [2024-05-15 04:57:36.669789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.892 [2024-05-15 04:57:37.115710] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:20:22.892 [2024-05-15 04:57:37.115793] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:20:22.892 [2024-05-15 04:57:37.115819] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:23.826 [2024-05-15 04:57:37.995868] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:24.392 04:57:38 -- common/autotest_common.sh@643 -- # es=216 00:20:24.392 04:57:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:24.392 04:57:38 -- common/autotest_common.sh@652 -- # es=88 00:20:24.392 04:57:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:24.392 04:57:38 -- common/autotest_common.sh@660 -- # es=1 00:20:24.392 04:57:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:24.392 04:57:38 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:20:24.392 04:57:38 -- common/autotest_common.sh@640 -- # local es=0 00:20:24.392 04:57:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:20:24.392 04:57:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.392 04:57:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.392 04:57:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.392 04:57:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.392 04:57:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.392 04:57:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.392 04:57:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:24.392 04:57:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:24.392 04:57:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:20:24.392 [2024-05-15 04:57:38.562966] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:24.392 [2024-05-15 04:57:38.563137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59597 ] 00:20:24.650 [2024-05-15 04:57:38.718599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.909 [2024-05-15 04:57:38.939452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.167 [2024-05-15 04:57:39.364972] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:20:25.167 [2024-05-15 04:57:39.365055] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:20:25.167 [2024-05-15 04:57:39.365099] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:26.101 [2024-05-15 04:57:40.270465] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:26.667 04:57:40 -- common/autotest_common.sh@643 -- # es=216 00:20:26.667 04:57:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:26.667 04:57:40 -- common/autotest_common.sh@652 -- # es=88 00:20:26.667 04:57:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:26.667 04:57:40 -- common/autotest_common.sh@660 -- # es=1 00:20:26.667 04:57:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:26.667 04:57:40 -- dd/posix.sh@46 -- # gen_bytes 512 00:20:26.667 04:57:40 -- dd/common.sh@98 -- # xtrace_disable 00:20:26.667 04:57:40 -- common/autotest_common.sh@10 -- # set +x 00:20:26.667 04:57:40 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:26.667 [2024-05-15 04:57:40.840943] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
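The nofollow scenario traced above hinges on O_NOFOLLOW refusing to open a symlink (ELOOP, reported as "Too many levels of symbolic links") on either side of the copy, while the same link is followed once the flag is dropped. A sketch of the sequence, reusing the not() wrapper sketched earlier and the ln -fs setup from the trace:

    ln -fs "$dump0" "$dump0.link"
    ln -fs "$dump1" "$dump1.link"

    not "$SPDK_DD" --aio --if="$dump0.link" --iflag=nofollow --of="$dump1"    # expected ELOOP on the input side
    not "$SPDK_DD" --aio --if="$dump0" --of="$dump1.link" --oflag=nofollow    # expected ELOOP on the output side

    "$SPDK_DD" --aio --if="$dump0.link" --of="$dump1"    # no nofollow: the link is followed; this copy completes below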
00:20:26.667 [2024-05-15 04:57:40.841103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:20:26.926 [2024-05-15 04:57:40.994456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.183 [2024-05-15 04:57:41.220700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.816  Copying: 512/512 [B] (average 500 kBps) 00:20:28.816 00:20:28.816 ************************************ 00:20:28.816 END TEST dd_flag_nofollow_forced_aio 00:20:28.816 ************************************ 00:20:28.816 04:57:42 -- dd/posix.sh@49 -- # [[ qxlkx2jign6azqgopbrmn6qro0hpmbfu4wszrbyecxpk1k432ln9ndn197vybq1yknyiw1ztalsw0litiocy93lhbk95wvja9pge2t7edpwf3t41yjj3wd4sztfzg9nst8j5m6mf4ia75w7rexn8uy4apnq23a5qf6juomttp071y4xcqlo6711smha419k9obvloc2upb93yzsaidyc762bl92xuhzrdczpbegk0cxbnq3ugjblkpuqs7cj832r4pton4jcn52wmbt9u3xtl2tmoyrs07sy6h5yje7zjg617ue2p0ltp6q7pv7yx8g5vclmazrwoapazaujpmwow57f7gr3xigz1szc4503eqp5nkjig6vfhukdpnhbkfm8livstekichy9mq515orkuubr986wm43h4rkk2lkx5cu831yu5f45d8e40bohwy7n7mgzbsgy1b9f7eesoro7g9j63awh6g5dsaf0k42ziwg9lsf3savndfedpk0asfu2 == \q\x\l\k\x\2\j\i\g\n\6\a\z\q\g\o\p\b\r\m\n\6\q\r\o\0\h\p\m\b\f\u\4\w\s\z\r\b\y\e\c\x\p\k\1\k\4\3\2\l\n\9\n\d\n\1\9\7\v\y\b\q\1\y\k\n\y\i\w\1\z\t\a\l\s\w\0\l\i\t\i\o\c\y\9\3\l\h\b\k\9\5\w\v\j\a\9\p\g\e\2\t\7\e\d\p\w\f\3\t\4\1\y\j\j\3\w\d\4\s\z\t\f\z\g\9\n\s\t\8\j\5\m\6\m\f\4\i\a\7\5\w\7\r\e\x\n\8\u\y\4\a\p\n\q\2\3\a\5\q\f\6\j\u\o\m\t\t\p\0\7\1\y\4\x\c\q\l\o\6\7\1\1\s\m\h\a\4\1\9\k\9\o\b\v\l\o\c\2\u\p\b\9\3\y\z\s\a\i\d\y\c\7\6\2\b\l\9\2\x\u\h\z\r\d\c\z\p\b\e\g\k\0\c\x\b\n\q\3\u\g\j\b\l\k\p\u\q\s\7\c\j\8\3\2\r\4\p\t\o\n\4\j\c\n\5\2\w\m\b\t\9\u\3\x\t\l\2\t\m\o\y\r\s\0\7\s\y\6\h\5\y\j\e\7\z\j\g\6\1\7\u\e\2\p\0\l\t\p\6\q\7\p\v\7\y\x\8\g\5\v\c\l\m\a\z\r\w\o\a\p\a\z\a\u\j\p\m\w\o\w\5\7\f\7\g\r\3\x\i\g\z\1\s\z\c\4\5\0\3\e\q\p\5\n\k\j\i\g\6\v\f\h\u\k\d\p\n\h\b\k\f\m\8\l\i\v\s\t\e\k\i\c\h\y\9\m\q\5\1\5\o\r\k\u\u\b\r\9\8\6\w\m\4\3\h\4\r\k\k\2\l\k\x\5\c\u\8\3\1\y\u\5\f\4\5\d\8\e\4\0\b\o\h\w\y\7\n\7\m\g\z\b\s\g\y\1\b\9\f\7\e\e\s\o\r\o\7\g\9\j\6\3\a\w\h\6\g\5\d\s\a\f\0\k\4\2\z\i\w\g\9\l\s\f\3\s\a\v\n\d\f\e\d\p\k\0\a\s\f\u\2 ]] 00:20:28.816 00:20:28.816 real 0m6.829s 00:20:28.816 user 0m5.273s 00:20:28.816 sys 0m0.960s 00:20:28.816 04:57:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.816 04:57:42 -- common/autotest_common.sh@10 -- # set +x 00:20:28.816 04:57:43 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:20:28.816 04:57:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:28.816 04:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:28.816 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:20:28.816 ************************************ 00:20:28.816 START TEST dd_flag_noatime_forced_aio 00:20:28.816 ************************************ 00:20:28.816 04:57:43 -- common/autotest_common.sh@1104 -- # noatime 00:20:28.816 04:57:43 -- dd/posix.sh@53 -- # local atime_if 00:20:28.816 04:57:43 -- dd/posix.sh@54 -- # local atime_of 00:20:28.816 04:57:43 -- dd/posix.sh@58 -- # gen_bytes 512 00:20:28.816 04:57:43 -- dd/common.sh@98 -- # xtrace_disable 00:20:28.816 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:20:28.816 04:57:43 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:28.816 04:57:43 -- dd/posix.sh@60 -- # atime_if=1715749061 
00:20:29.074 04:57:43 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:29.074 04:57:43 -- dd/posix.sh@61 -- # atime_of=1715749062 00:20:29.074 04:57:43 -- dd/posix.sh@66 -- # sleep 1 00:20:30.009 04:57:44 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:30.009 [2024-05-15 04:57:44.197143] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:30.009 [2024-05-15 04:57:44.197290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59693 ] 00:20:30.266 [2024-05-15 04:57:44.350319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.531 [2024-05-15 04:57:44.593369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.476  Copying: 512/512 [B] (average 500 kBps) 00:20:32.476 00:20:32.476 04:57:46 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:32.476 04:57:46 -- dd/posix.sh@69 -- # (( atime_if == 1715749061 )) 00:20:32.476 04:57:46 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:32.476 04:57:46 -- dd/posix.sh@70 -- # (( atime_of == 1715749062 )) 00:20:32.476 04:57:46 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:32.476 [2024-05-15 04:57:46.522057] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:32.476 [2024-05-15 04:57:46.522221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:20:32.476 [2024-05-15 04:57:46.685732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.735 [2024-05-15 04:57:46.930276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.721  Copying: 512/512 [B] (average 500 kBps) 00:20:34.721 00:20:34.721 04:57:48 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:34.721 ************************************ 00:20:34.721 END TEST dd_flag_noatime_forced_aio 00:20:34.721 ************************************ 00:20:34.721 04:57:48 -- dd/posix.sh@73 -- # (( atime_if < 1715749067 )) 00:20:34.721 00:20:34.721 real 0m5.675s 00:20:34.721 user 0m3.590s 00:20:34.721 sys 0m0.680s 00:20:34.721 04:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.721 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.721 04:57:48 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:20:34.721 04:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:34.721 04:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:34.721 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.721 ************************************ 00:20:34.721 START TEST dd_flags_misc_forced_aio 00:20:34.721 ************************************ 00:20:34.721 04:57:48 -- common/autotest_common.sh@1104 -- # io 00:20:34.721 04:57:48 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:20:34.721 04:57:48 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:20:34.721 04:57:48 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:20:34.721 04:57:48 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:20:34.721 04:57:48 -- dd/posix.sh@86 -- # gen_bytes 512 00:20:34.721 04:57:48 -- dd/common.sh@98 -- # xtrace_disable 00:20:34.721 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:20:34.721 04:57:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:34.721 04:57:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:20:34.721 [2024-05-15 04:57:48.915752] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
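The dd_flag_noatime_forced_aio check that ended above compares the input file's access time before and after each copy: a read opened with O_NOATIME must leave atime untouched, and a plain read one second later must advance it. A sketch of that sequence, with $SPDK_DD and the dump paths as assumed shorthands (the epoch values are the ones from this run):

    atime_if=$(stat --printf=%X "$dump0")                # e.g. 1715749061
    sleep 1                                              # guarantee a later read lands on a newer second

    "$SPDK_DD" --aio --if="$dump0" --iflag=noatime --of="$dump1"
    (( $(stat --printf=%X "$dump0") == atime_if ))       # unchanged: O_NOATIME honored

    "$SPDK_DD" --aio --if="$dump0" --of="$dump1"
    (( atime_if < $(stat --printf=%X "$dump0") ))        # the plain read refreshed atime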
00:20:34.721 [2024-05-15 04:57:48.915920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:20:34.979 [2024-05-15 04:57:49.070935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.237 [2024-05-15 04:57:49.308517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.177  Copying: 512/512 [B] (average 500 kBps) 00:20:37.177 00:20:37.177 04:57:51 -- dd/posix.sh@93 -- # [[ 5f4ksl2wupc4031uu5eez2v2lknfdphyg24xnw5e4lg27ga4bh986mhezndc6mfco7tsk62qbsngd0uuv2tf66y30gzq1jibvin7cgvka5kj719aogymkozofwuryghw7vbh33msdbgylo238q1syhrunn97d0gsq8xhjvgbpyal4fvz7n5citm1zc8idb2rqzu5k4ja3mhk9dhaa58f2tjp9girobusmw1bm8x228sj6aatc0afe43kc9qbhp3rivhbgjopirq3ntzqis6t5vs5cjvyn1v8h6yn4mfopggcleojzqc1dg3xtgygn48410hefz18lkyubgfall6ht38f8uldewuqdryw5t2tevjdm2ns0u38treoptve0on2fcp1w2nslgh7u6p19gq8kpgn68v5uwdy43wdtdxdbcey8wqm8qljer4fcpqzyzj357vtx43fqxradxgmjl6vpxqhgr9amxrwqemrjvod3owa7hwet5oqc8hwfq39aobq == \5\f\4\k\s\l\2\w\u\p\c\4\0\3\1\u\u\5\e\e\z\2\v\2\l\k\n\f\d\p\h\y\g\2\4\x\n\w\5\e\4\l\g\2\7\g\a\4\b\h\9\8\6\m\h\e\z\n\d\c\6\m\f\c\o\7\t\s\k\6\2\q\b\s\n\g\d\0\u\u\v\2\t\f\6\6\y\3\0\g\z\q\1\j\i\b\v\i\n\7\c\g\v\k\a\5\k\j\7\1\9\a\o\g\y\m\k\o\z\o\f\w\u\r\y\g\h\w\7\v\b\h\3\3\m\s\d\b\g\y\l\o\2\3\8\q\1\s\y\h\r\u\n\n\9\7\d\0\g\s\q\8\x\h\j\v\g\b\p\y\a\l\4\f\v\z\7\n\5\c\i\t\m\1\z\c\8\i\d\b\2\r\q\z\u\5\k\4\j\a\3\m\h\k\9\d\h\a\a\5\8\f\2\t\j\p\9\g\i\r\o\b\u\s\m\w\1\b\m\8\x\2\2\8\s\j\6\a\a\t\c\0\a\f\e\4\3\k\c\9\q\b\h\p\3\r\i\v\h\b\g\j\o\p\i\r\q\3\n\t\z\q\i\s\6\t\5\v\s\5\c\j\v\y\n\1\v\8\h\6\y\n\4\m\f\o\p\g\g\c\l\e\o\j\z\q\c\1\d\g\3\x\t\g\y\g\n\4\8\4\1\0\h\e\f\z\1\8\l\k\y\u\b\g\f\a\l\l\6\h\t\3\8\f\8\u\l\d\e\w\u\q\d\r\y\w\5\t\2\t\e\v\j\d\m\2\n\s\0\u\3\8\t\r\e\o\p\t\v\e\0\o\n\2\f\c\p\1\w\2\n\s\l\g\h\7\u\6\p\1\9\g\q\8\k\p\g\n\6\8\v\5\u\w\d\y\4\3\w\d\t\d\x\d\b\c\e\y\8\w\q\m\8\q\l\j\e\r\4\f\c\p\q\z\y\z\j\3\5\7\v\t\x\4\3\f\q\x\r\a\d\x\g\m\j\l\6\v\p\x\q\h\g\r\9\a\m\x\r\w\q\e\m\r\j\v\o\d\3\o\w\a\7\h\w\e\t\5\o\q\c\8\h\w\f\q\3\9\a\o\b\q ]] 00:20:37.177 04:57:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:37.177 04:57:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:20:37.177 [2024-05-15 04:57:51.243342] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:37.177 [2024-05-15 04:57:51.243512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59812 ] 00:20:37.435 [2024-05-15 04:57:51.412093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.435 [2024-05-15 04:57:51.642531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.375  Copying: 512/512 [B] (average 500 kBps) 00:20:39.375 00:20:39.375 04:57:53 -- dd/posix.sh@93 -- # [[ 5f4ksl2wupc4031uu5eez2v2lknfdphyg24xnw5e4lg27ga4bh986mhezndc6mfco7tsk62qbsngd0uuv2tf66y30gzq1jibvin7cgvka5kj719aogymkozofwuryghw7vbh33msdbgylo238q1syhrunn97d0gsq8xhjvgbpyal4fvz7n5citm1zc8idb2rqzu5k4ja3mhk9dhaa58f2tjp9girobusmw1bm8x228sj6aatc0afe43kc9qbhp3rivhbgjopirq3ntzqis6t5vs5cjvyn1v8h6yn4mfopggcleojzqc1dg3xtgygn48410hefz18lkyubgfall6ht38f8uldewuqdryw5t2tevjdm2ns0u38treoptve0on2fcp1w2nslgh7u6p19gq8kpgn68v5uwdy43wdtdxdbcey8wqm8qljer4fcpqzyzj357vtx43fqxradxgmjl6vpxqhgr9amxrwqemrjvod3owa7hwet5oqc8hwfq39aobq == \5\f\4\k\s\l\2\w\u\p\c\4\0\3\1\u\u\5\e\e\z\2\v\2\l\k\n\f\d\p\h\y\g\2\4\x\n\w\5\e\4\l\g\2\7\g\a\4\b\h\9\8\6\m\h\e\z\n\d\c\6\m\f\c\o\7\t\s\k\6\2\q\b\s\n\g\d\0\u\u\v\2\t\f\6\6\y\3\0\g\z\q\1\j\i\b\v\i\n\7\c\g\v\k\a\5\k\j\7\1\9\a\o\g\y\m\k\o\z\o\f\w\u\r\y\g\h\w\7\v\b\h\3\3\m\s\d\b\g\y\l\o\2\3\8\q\1\s\y\h\r\u\n\n\9\7\d\0\g\s\q\8\x\h\j\v\g\b\p\y\a\l\4\f\v\z\7\n\5\c\i\t\m\1\z\c\8\i\d\b\2\r\q\z\u\5\k\4\j\a\3\m\h\k\9\d\h\a\a\5\8\f\2\t\j\p\9\g\i\r\o\b\u\s\m\w\1\b\m\8\x\2\2\8\s\j\6\a\a\t\c\0\a\f\e\4\3\k\c\9\q\b\h\p\3\r\i\v\h\b\g\j\o\p\i\r\q\3\n\t\z\q\i\s\6\t\5\v\s\5\c\j\v\y\n\1\v\8\h\6\y\n\4\m\f\o\p\g\g\c\l\e\o\j\z\q\c\1\d\g\3\x\t\g\y\g\n\4\8\4\1\0\h\e\f\z\1\8\l\k\y\u\b\g\f\a\l\l\6\h\t\3\8\f\8\u\l\d\e\w\u\q\d\r\y\w\5\t\2\t\e\v\j\d\m\2\n\s\0\u\3\8\t\r\e\o\p\t\v\e\0\o\n\2\f\c\p\1\w\2\n\s\l\g\h\7\u\6\p\1\9\g\q\8\k\p\g\n\6\8\v\5\u\w\d\y\4\3\w\d\t\d\x\d\b\c\e\y\8\w\q\m\8\q\l\j\e\r\4\f\c\p\q\z\y\z\j\3\5\7\v\t\x\4\3\f\q\x\r\a\d\x\g\m\j\l\6\v\p\x\q\h\g\r\9\a\m\x\r\w\q\e\m\r\j\v\o\d\3\o\w\a\7\h\w\e\t\5\o\q\c\8\h\w\f\q\3\9\a\o\b\q ]] 00:20:39.375 04:57:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:39.375 04:57:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:20:39.375 [2024-05-15 04:57:53.560671] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:39.375 [2024-05-15 04:57:53.560881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59848 ] 00:20:39.634 [2024-05-15 04:57:53.715570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.921 [2024-05-15 04:57:53.955312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.550  Copying: 512/512 [B] (average 166 kBps) 00:20:41.550 00:20:41.550 04:57:55 -- dd/posix.sh@93 -- # [[ 5f4ksl2wupc4031uu5eez2v2lknfdphyg24xnw5e4lg27ga4bh986mhezndc6mfco7tsk62qbsngd0uuv2tf66y30gzq1jibvin7cgvka5kj719aogymkozofwuryghw7vbh33msdbgylo238q1syhrunn97d0gsq8xhjvgbpyal4fvz7n5citm1zc8idb2rqzu5k4ja3mhk9dhaa58f2tjp9girobusmw1bm8x228sj6aatc0afe43kc9qbhp3rivhbgjopirq3ntzqis6t5vs5cjvyn1v8h6yn4mfopggcleojzqc1dg3xtgygn48410hefz18lkyubgfall6ht38f8uldewuqdryw5t2tevjdm2ns0u38treoptve0on2fcp1w2nslgh7u6p19gq8kpgn68v5uwdy43wdtdxdbcey8wqm8qljer4fcpqzyzj357vtx43fqxradxgmjl6vpxqhgr9amxrwqemrjvod3owa7hwet5oqc8hwfq39aobq == \5\f\4\k\s\l\2\w\u\p\c\4\0\3\1\u\u\5\e\e\z\2\v\2\l\k\n\f\d\p\h\y\g\2\4\x\n\w\5\e\4\l\g\2\7\g\a\4\b\h\9\8\6\m\h\e\z\n\d\c\6\m\f\c\o\7\t\s\k\6\2\q\b\s\n\g\d\0\u\u\v\2\t\f\6\6\y\3\0\g\z\q\1\j\i\b\v\i\n\7\c\g\v\k\a\5\k\j\7\1\9\a\o\g\y\m\k\o\z\o\f\w\u\r\y\g\h\w\7\v\b\h\3\3\m\s\d\b\g\y\l\o\2\3\8\q\1\s\y\h\r\u\n\n\9\7\d\0\g\s\q\8\x\h\j\v\g\b\p\y\a\l\4\f\v\z\7\n\5\c\i\t\m\1\z\c\8\i\d\b\2\r\q\z\u\5\k\4\j\a\3\m\h\k\9\d\h\a\a\5\8\f\2\t\j\p\9\g\i\r\o\b\u\s\m\w\1\b\m\8\x\2\2\8\s\j\6\a\a\t\c\0\a\f\e\4\3\k\c\9\q\b\h\p\3\r\i\v\h\b\g\j\o\p\i\r\q\3\n\t\z\q\i\s\6\t\5\v\s\5\c\j\v\y\n\1\v\8\h\6\y\n\4\m\f\o\p\g\g\c\l\e\o\j\z\q\c\1\d\g\3\x\t\g\y\g\n\4\8\4\1\0\h\e\f\z\1\8\l\k\y\u\b\g\f\a\l\l\6\h\t\3\8\f\8\u\l\d\e\w\u\q\d\r\y\w\5\t\2\t\e\v\j\d\m\2\n\s\0\u\3\8\t\r\e\o\p\t\v\e\0\o\n\2\f\c\p\1\w\2\n\s\l\g\h\7\u\6\p\1\9\g\q\8\k\p\g\n\6\8\v\5\u\w\d\y\4\3\w\d\t\d\x\d\b\c\e\y\8\w\q\m\8\q\l\j\e\r\4\f\c\p\q\z\y\z\j\3\5\7\v\t\x\4\3\f\q\x\r\a\d\x\g\m\j\l\6\v\p\x\q\h\g\r\9\a\m\x\r\w\q\e\m\r\j\v\o\d\3\o\w\a\7\h\w\e\t\5\o\q\c\8\h\w\f\q\3\9\a\o\b\q ]] 00:20:41.550 04:57:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:41.550 04:57:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:20:41.808 [2024-05-15 04:57:55.867171] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:41.809 [2024-05-15 04:57:55.867329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:20:41.809 [2024-05-15 04:57:56.028909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.067 [2024-05-15 04:57:56.252300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.021  Copying: 512/512 [B] (average 250 kBps) 00:20:44.021 00:20:44.021 04:57:58 -- dd/posix.sh@93 -- # [[ 5f4ksl2wupc4031uu5eez2v2lknfdphyg24xnw5e4lg27ga4bh986mhezndc6mfco7tsk62qbsngd0uuv2tf66y30gzq1jibvin7cgvka5kj719aogymkozofwuryghw7vbh33msdbgylo238q1syhrunn97d0gsq8xhjvgbpyal4fvz7n5citm1zc8idb2rqzu5k4ja3mhk9dhaa58f2tjp9girobusmw1bm8x228sj6aatc0afe43kc9qbhp3rivhbgjopirq3ntzqis6t5vs5cjvyn1v8h6yn4mfopggcleojzqc1dg3xtgygn48410hefz18lkyubgfall6ht38f8uldewuqdryw5t2tevjdm2ns0u38treoptve0on2fcp1w2nslgh7u6p19gq8kpgn68v5uwdy43wdtdxdbcey8wqm8qljer4fcpqzyzj357vtx43fqxradxgmjl6vpxqhgr9amxrwqemrjvod3owa7hwet5oqc8hwfq39aobq == \5\f\4\k\s\l\2\w\u\p\c\4\0\3\1\u\u\5\e\e\z\2\v\2\l\k\n\f\d\p\h\y\g\2\4\x\n\w\5\e\4\l\g\2\7\g\a\4\b\h\9\8\6\m\h\e\z\n\d\c\6\m\f\c\o\7\t\s\k\6\2\q\b\s\n\g\d\0\u\u\v\2\t\f\6\6\y\3\0\g\z\q\1\j\i\b\v\i\n\7\c\g\v\k\a\5\k\j\7\1\9\a\o\g\y\m\k\o\z\o\f\w\u\r\y\g\h\w\7\v\b\h\3\3\m\s\d\b\g\y\l\o\2\3\8\q\1\s\y\h\r\u\n\n\9\7\d\0\g\s\q\8\x\h\j\v\g\b\p\y\a\l\4\f\v\z\7\n\5\c\i\t\m\1\z\c\8\i\d\b\2\r\q\z\u\5\k\4\j\a\3\m\h\k\9\d\h\a\a\5\8\f\2\t\j\p\9\g\i\r\o\b\u\s\m\w\1\b\m\8\x\2\2\8\s\j\6\a\a\t\c\0\a\f\e\4\3\k\c\9\q\b\h\p\3\r\i\v\h\b\g\j\o\p\i\r\q\3\n\t\z\q\i\s\6\t\5\v\s\5\c\j\v\y\n\1\v\8\h\6\y\n\4\m\f\o\p\g\g\c\l\e\o\j\z\q\c\1\d\g\3\x\t\g\y\g\n\4\8\4\1\0\h\e\f\z\1\8\l\k\y\u\b\g\f\a\l\l\6\h\t\3\8\f\8\u\l\d\e\w\u\q\d\r\y\w\5\t\2\t\e\v\j\d\m\2\n\s\0\u\3\8\t\r\e\o\p\t\v\e\0\o\n\2\f\c\p\1\w\2\n\s\l\g\h\7\u\6\p\1\9\g\q\8\k\p\g\n\6\8\v\5\u\w\d\y\4\3\w\d\t\d\x\d\b\c\e\y\8\w\q\m\8\q\l\j\e\r\4\f\c\p\q\z\y\z\j\3\5\7\v\t\x\4\3\f\q\x\r\a\d\x\g\m\j\l\6\v\p\x\q\h\g\r\9\a\m\x\r\w\q\e\m\r\j\v\o\d\3\o\w\a\7\h\w\e\t\5\o\q\c\8\h\w\f\q\3\9\a\o\b\q ]] 00:20:44.021 04:57:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:20:44.021 04:57:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:20:44.021 04:57:58 -- dd/common.sh@98 -- # xtrace_disable 00:20:44.021 04:57:58 -- common/autotest_common.sh@10 -- # set +x 00:20:44.021 04:57:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:44.021 04:57:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:20:44.021 [2024-05-15 04:57:58.165851] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:44.021 [2024-05-15 04:57:58.166007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59910 ] 00:20:44.277 [2024-05-15 04:57:58.317627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.535 [2024-05-15 04:57:58.544755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.167  Copying: 512/512 [B] (average 500 kBps) 00:20:46.167 00:20:46.167 04:58:00 -- dd/posix.sh@93 -- # [[ oucxcm2ha559pi6dnenjte9yp3j6yu3g9y7hgb4d7k9aaa7ka2g1i8inlnnkns14hpe2012fimx1nm7bg6uk7ujeexyo6u1c8gv58x6nkda5v1bpgx55otlx54iber4lkjl1ffositi9g9w98b0za6igtba5glv4w82gx25x48xv1qz6kt64jk2br0vzhmees47e9v8u81k5nzm7k8u5fj58sp9pq7bxlhz0j2b5ao8qaqtc06asm6eo4jgl9jkz86npopmgaola9zohae5tm2qhcdivamcegyb4ccxvpn7pkqe8w5wxeaodi6evg0zt127s2w4wvle1s9bpceo9av5ok6t4r15jo4bgq2gl2ytkmmov6d5nqfnqh8azclnf3vnimv421jfbnxcchwp6sxds4ojn7dx0si5n0en2l3oeyk73qf3iy3iu1zkmy3t6sces7zbqhz1f5b5kmo7y1qgdhrcwbu3wmdu8l46w20feyo73gdsiufqdww3mdn7y == \o\u\c\x\c\m\2\h\a\5\5\9\p\i\6\d\n\e\n\j\t\e\9\y\p\3\j\6\y\u\3\g\9\y\7\h\g\b\4\d\7\k\9\a\a\a\7\k\a\2\g\1\i\8\i\n\l\n\n\k\n\s\1\4\h\p\e\2\0\1\2\f\i\m\x\1\n\m\7\b\g\6\u\k\7\u\j\e\e\x\y\o\6\u\1\c\8\g\v\5\8\x\6\n\k\d\a\5\v\1\b\p\g\x\5\5\o\t\l\x\5\4\i\b\e\r\4\l\k\j\l\1\f\f\o\s\i\t\i\9\g\9\w\9\8\b\0\z\a\6\i\g\t\b\a\5\g\l\v\4\w\8\2\g\x\2\5\x\4\8\x\v\1\q\z\6\k\t\6\4\j\k\2\b\r\0\v\z\h\m\e\e\s\4\7\e\9\v\8\u\8\1\k\5\n\z\m\7\k\8\u\5\f\j\5\8\s\p\9\p\q\7\b\x\l\h\z\0\j\2\b\5\a\o\8\q\a\q\t\c\0\6\a\s\m\6\e\o\4\j\g\l\9\j\k\z\8\6\n\p\o\p\m\g\a\o\l\a\9\z\o\h\a\e\5\t\m\2\q\h\c\d\i\v\a\m\c\e\g\y\b\4\c\c\x\v\p\n\7\p\k\q\e\8\w\5\w\x\e\a\o\d\i\6\e\v\g\0\z\t\1\2\7\s\2\w\4\w\v\l\e\1\s\9\b\p\c\e\o\9\a\v\5\o\k\6\t\4\r\1\5\j\o\4\b\g\q\2\g\l\2\y\t\k\m\m\o\v\6\d\5\n\q\f\n\q\h\8\a\z\c\l\n\f\3\v\n\i\m\v\4\2\1\j\f\b\n\x\c\c\h\w\p\6\s\x\d\s\4\o\j\n\7\d\x\0\s\i\5\n\0\e\n\2\l\3\o\e\y\k\7\3\q\f\3\i\y\3\i\u\1\z\k\m\y\3\t\6\s\c\e\s\7\z\b\q\h\z\1\f\5\b\5\k\m\o\7\y\1\q\g\d\h\r\c\w\b\u\3\w\m\d\u\8\l\4\6\w\2\0\f\e\y\o\7\3\g\d\s\i\u\f\q\d\w\w\3\m\d\n\7\y ]] 00:20:46.167 04:58:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:46.167 04:58:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:20:46.425 [2024-05-15 04:58:00.467622] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:46.425 [2024-05-15 04:58:00.467917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59938 ] 00:20:46.425 [2024-05-15 04:58:00.618318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.683 [2024-05-15 04:58:00.845201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.621  Copying: 512/512 [B] (average 500 kBps) 00:20:48.621 00:20:48.621 04:58:02 -- dd/posix.sh@93 -- # [[ oucxcm2ha559pi6dnenjte9yp3j6yu3g9y7hgb4d7k9aaa7ka2g1i8inlnnkns14hpe2012fimx1nm7bg6uk7ujeexyo6u1c8gv58x6nkda5v1bpgx55otlx54iber4lkjl1ffositi9g9w98b0za6igtba5glv4w82gx25x48xv1qz6kt64jk2br0vzhmees47e9v8u81k5nzm7k8u5fj58sp9pq7bxlhz0j2b5ao8qaqtc06asm6eo4jgl9jkz86npopmgaola9zohae5tm2qhcdivamcegyb4ccxvpn7pkqe8w5wxeaodi6evg0zt127s2w4wvle1s9bpceo9av5ok6t4r15jo4bgq2gl2ytkmmov6d5nqfnqh8azclnf3vnimv421jfbnxcchwp6sxds4ojn7dx0si5n0en2l3oeyk73qf3iy3iu1zkmy3t6sces7zbqhz1f5b5kmo7y1qgdhrcwbu3wmdu8l46w20feyo73gdsiufqdww3mdn7y == \o\u\c\x\c\m\2\h\a\5\5\9\p\i\6\d\n\e\n\j\t\e\9\y\p\3\j\6\y\u\3\g\9\y\7\h\g\b\4\d\7\k\9\a\a\a\7\k\a\2\g\1\i\8\i\n\l\n\n\k\n\s\1\4\h\p\e\2\0\1\2\f\i\m\x\1\n\m\7\b\g\6\u\k\7\u\j\e\e\x\y\o\6\u\1\c\8\g\v\5\8\x\6\n\k\d\a\5\v\1\b\p\g\x\5\5\o\t\l\x\5\4\i\b\e\r\4\l\k\j\l\1\f\f\o\s\i\t\i\9\g\9\w\9\8\b\0\z\a\6\i\g\t\b\a\5\g\l\v\4\w\8\2\g\x\2\5\x\4\8\x\v\1\q\z\6\k\t\6\4\j\k\2\b\r\0\v\z\h\m\e\e\s\4\7\e\9\v\8\u\8\1\k\5\n\z\m\7\k\8\u\5\f\j\5\8\s\p\9\p\q\7\b\x\l\h\z\0\j\2\b\5\a\o\8\q\a\q\t\c\0\6\a\s\m\6\e\o\4\j\g\l\9\j\k\z\8\6\n\p\o\p\m\g\a\o\l\a\9\z\o\h\a\e\5\t\m\2\q\h\c\d\i\v\a\m\c\e\g\y\b\4\c\c\x\v\p\n\7\p\k\q\e\8\w\5\w\x\e\a\o\d\i\6\e\v\g\0\z\t\1\2\7\s\2\w\4\w\v\l\e\1\s\9\b\p\c\e\o\9\a\v\5\o\k\6\t\4\r\1\5\j\o\4\b\g\q\2\g\l\2\y\t\k\m\m\o\v\6\d\5\n\q\f\n\q\h\8\a\z\c\l\n\f\3\v\n\i\m\v\4\2\1\j\f\b\n\x\c\c\h\w\p\6\s\x\d\s\4\o\j\n\7\d\x\0\s\i\5\n\0\e\n\2\l\3\o\e\y\k\7\3\q\f\3\i\y\3\i\u\1\z\k\m\y\3\t\6\s\c\e\s\7\z\b\q\h\z\1\f\5\b\5\k\m\o\7\y\1\q\g\d\h\r\c\w\b\u\3\w\m\d\u\8\l\4\6\w\2\0\f\e\y\o\7\3\g\d\s\i\u\f\q\d\w\w\3\m\d\n\7\y ]] 00:20:48.621 04:58:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:48.621 04:58:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:20:48.621 [2024-05-15 04:58:02.762433] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:48.621 [2024-05-15 04:58:02.762602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59970 ] 00:20:48.878 [2024-05-15 04:58:02.913698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.135 [2024-05-15 04:58:03.145877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.768  Copying: 512/512 [B] (average 250 kBps) 00:20:50.768 00:20:50.768 04:58:04 -- dd/posix.sh@93 -- # [[ oucxcm2ha559pi6dnenjte9yp3j6yu3g9y7hgb4d7k9aaa7ka2g1i8inlnnkns14hpe2012fimx1nm7bg6uk7ujeexyo6u1c8gv58x6nkda5v1bpgx55otlx54iber4lkjl1ffositi9g9w98b0za6igtba5glv4w82gx25x48xv1qz6kt64jk2br0vzhmees47e9v8u81k5nzm7k8u5fj58sp9pq7bxlhz0j2b5ao8qaqtc06asm6eo4jgl9jkz86npopmgaola9zohae5tm2qhcdivamcegyb4ccxvpn7pkqe8w5wxeaodi6evg0zt127s2w4wvle1s9bpceo9av5ok6t4r15jo4bgq2gl2ytkmmov6d5nqfnqh8azclnf3vnimv421jfbnxcchwp6sxds4ojn7dx0si5n0en2l3oeyk73qf3iy3iu1zkmy3t6sces7zbqhz1f5b5kmo7y1qgdhrcwbu3wmdu8l46w20feyo73gdsiufqdww3mdn7y == \o\u\c\x\c\m\2\h\a\5\5\9\p\i\6\d\n\e\n\j\t\e\9\y\p\3\j\6\y\u\3\g\9\y\7\h\g\b\4\d\7\k\9\a\a\a\7\k\a\2\g\1\i\8\i\n\l\n\n\k\n\s\1\4\h\p\e\2\0\1\2\f\i\m\x\1\n\m\7\b\g\6\u\k\7\u\j\e\e\x\y\o\6\u\1\c\8\g\v\5\8\x\6\n\k\d\a\5\v\1\b\p\g\x\5\5\o\t\l\x\5\4\i\b\e\r\4\l\k\j\l\1\f\f\o\s\i\t\i\9\g\9\w\9\8\b\0\z\a\6\i\g\t\b\a\5\g\l\v\4\w\8\2\g\x\2\5\x\4\8\x\v\1\q\z\6\k\t\6\4\j\k\2\b\r\0\v\z\h\m\e\e\s\4\7\e\9\v\8\u\8\1\k\5\n\z\m\7\k\8\u\5\f\j\5\8\s\p\9\p\q\7\b\x\l\h\z\0\j\2\b\5\a\o\8\q\a\q\t\c\0\6\a\s\m\6\e\o\4\j\g\l\9\j\k\z\8\6\n\p\o\p\m\g\a\o\l\a\9\z\o\h\a\e\5\t\m\2\q\h\c\d\i\v\a\m\c\e\g\y\b\4\c\c\x\v\p\n\7\p\k\q\e\8\w\5\w\x\e\a\o\d\i\6\e\v\g\0\z\t\1\2\7\s\2\w\4\w\v\l\e\1\s\9\b\p\c\e\o\9\a\v\5\o\k\6\t\4\r\1\5\j\o\4\b\g\q\2\g\l\2\y\t\k\m\m\o\v\6\d\5\n\q\f\n\q\h\8\a\z\c\l\n\f\3\v\n\i\m\v\4\2\1\j\f\b\n\x\c\c\h\w\p\6\s\x\d\s\4\o\j\n\7\d\x\0\s\i\5\n\0\e\n\2\l\3\o\e\y\k\7\3\q\f\3\i\y\3\i\u\1\z\k\m\y\3\t\6\s\c\e\s\7\z\b\q\h\z\1\f\5\b\5\k\m\o\7\y\1\q\g\d\h\r\c\w\b\u\3\w\m\d\u\8\l\4\6\w\2\0\f\e\y\o\7\3\g\d\s\i\u\f\q\d\w\w\3\m\d\n\7\y ]] 00:20:50.768 04:58:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:20:50.768 04:58:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:20:51.025 [2024-05-15 04:58:05.074160] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:51.025 [2024-05-15 04:58:05.074324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60002 ] 00:20:51.025 [2024-05-15 04:58:05.242024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.282 [2024-05-15 04:58:05.485493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.223  Copying: 512/512 [B] (average 250 kBps) 00:20:53.223 00:20:53.223 ************************************ 00:20:53.223 END TEST dd_flags_misc_forced_aio 00:20:53.223 ************************************ 00:20:53.223 04:58:07 -- dd/posix.sh@93 -- # [[ oucxcm2ha559pi6dnenjte9yp3j6yu3g9y7hgb4d7k9aaa7ka2g1i8inlnnkns14hpe2012fimx1nm7bg6uk7ujeexyo6u1c8gv58x6nkda5v1bpgx55otlx54iber4lkjl1ffositi9g9w98b0za6igtba5glv4w82gx25x48xv1qz6kt64jk2br0vzhmees47e9v8u81k5nzm7k8u5fj58sp9pq7bxlhz0j2b5ao8qaqtc06asm6eo4jgl9jkz86npopmgaola9zohae5tm2qhcdivamcegyb4ccxvpn7pkqe8w5wxeaodi6evg0zt127s2w4wvle1s9bpceo9av5ok6t4r15jo4bgq2gl2ytkmmov6d5nqfnqh8azclnf3vnimv421jfbnxcchwp6sxds4ojn7dx0si5n0en2l3oeyk73qf3iy3iu1zkmy3t6sces7zbqhz1f5b5kmo7y1qgdhrcwbu3wmdu8l46w20feyo73gdsiufqdww3mdn7y == \o\u\c\x\c\m\2\h\a\5\5\9\p\i\6\d\n\e\n\j\t\e\9\y\p\3\j\6\y\u\3\g\9\y\7\h\g\b\4\d\7\k\9\a\a\a\7\k\a\2\g\1\i\8\i\n\l\n\n\k\n\s\1\4\h\p\e\2\0\1\2\f\i\m\x\1\n\m\7\b\g\6\u\k\7\u\j\e\e\x\y\o\6\u\1\c\8\g\v\5\8\x\6\n\k\d\a\5\v\1\b\p\g\x\5\5\o\t\l\x\5\4\i\b\e\r\4\l\k\j\l\1\f\f\o\s\i\t\i\9\g\9\w\9\8\b\0\z\a\6\i\g\t\b\a\5\g\l\v\4\w\8\2\g\x\2\5\x\4\8\x\v\1\q\z\6\k\t\6\4\j\k\2\b\r\0\v\z\h\m\e\e\s\4\7\e\9\v\8\u\8\1\k\5\n\z\m\7\k\8\u\5\f\j\5\8\s\p\9\p\q\7\b\x\l\h\z\0\j\2\b\5\a\o\8\q\a\q\t\c\0\6\a\s\m\6\e\o\4\j\g\l\9\j\k\z\8\6\n\p\o\p\m\g\a\o\l\a\9\z\o\h\a\e\5\t\m\2\q\h\c\d\i\v\a\m\c\e\g\y\b\4\c\c\x\v\p\n\7\p\k\q\e\8\w\5\w\x\e\a\o\d\i\6\e\v\g\0\z\t\1\2\7\s\2\w\4\w\v\l\e\1\s\9\b\p\c\e\o\9\a\v\5\o\k\6\t\4\r\1\5\j\o\4\b\g\q\2\g\l\2\y\t\k\m\m\o\v\6\d\5\n\q\f\n\q\h\8\a\z\c\l\n\f\3\v\n\i\m\v\4\2\1\j\f\b\n\x\c\c\h\w\p\6\s\x\d\s\4\o\j\n\7\d\x\0\s\i\5\n\0\e\n\2\l\3\o\e\y\k\7\3\q\f\3\i\y\3\i\u\1\z\k\m\y\3\t\6\s\c\e\s\7\z\b\q\h\z\1\f\5\b\5\k\m\o\7\y\1\q\g\d\h\r\c\w\b\u\3\w\m\d\u\8\l\4\6\w\2\0\f\e\y\o\7\3\g\d\s\i\u\f\q\d\w\w\3\m\d\n\7\y ]] 00:20:53.223 00:20:53.223 real 0m18.482s 00:20:53.223 user 0m14.275s 00:20:53.223 sys 0m2.581s 00:20:53.223 04:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.223 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:20:53.223 04:58:07 -- dd/posix.sh@1 -- # cleanup 00:20:53.223 04:58:07 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:20:53.223 04:58:07 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:20:53.223 ************************************ 00:20:53.223 END TEST spdk_dd_posix 00:20:53.223 ************************************ 00:20:53.223 00:20:53.223 real 1m16.282s 00:20:53.223 user 0m57.071s 00:20:53.223 sys 0m10.692s 00:20:53.223 04:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.223 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:20:53.223 04:58:07 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:20:53.223 04:58:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:53.223 04:58:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:53.223 04:58:07 -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.223 ************************************ 00:20:53.223 START TEST spdk_dd_malloc 00:20:53.223 ************************************ 00:20:53.223 04:58:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:20:53.223 * Looking for test storage... 00:20:53.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:53.223 04:58:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.223 04:58:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.223 04:58:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.223 04:58:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.223 04:58:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:20:53.223 04:58:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:20:53.223 04:58:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:20:53.223 04:58:07 -- paths/export.sh@5 -- # export PATH 00:20:53.223 04:58:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:20:53.223 04:58:07 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:20:53.223 04:58:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:53.223 04:58:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:53.223 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:20:53.482 ************************************ 00:20:53.482 START TEST dd_malloc_copy 00:20:53.482 ************************************ 00:20:53.482 04:58:07 -- common/autotest_common.sh@1104 -- # malloc_copy 00:20:53.482 04:58:07 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:20:53.482 04:58:07 -- 
dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:20:53.482 04:58:07 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:20:53.482 04:58:07 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:20:53.482 04:58:07 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:20:53.482 04:58:07 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:20:53.482 04:58:07 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:20:53.482 04:58:07 -- dd/malloc.sh@28 -- # gen_conf 00:20:53.482 04:58:07 -- dd/common.sh@31 -- # xtrace_disable 00:20:53.482 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:20:53.482 { 00:20:53.482 "subsystems": [ 00:20:53.482 { 00:20:53.482 "subsystem": "bdev", 00:20:53.482 "config": [ 00:20:53.482 { 00:20:53.482 "params": { 00:20:53.482 "block_size": 512, 00:20:53.482 "name": "malloc0", 00:20:53.482 "num_blocks": 1048576 00:20:53.482 }, 00:20:53.482 "method": "bdev_malloc_create" 00:20:53.482 }, 00:20:53.482 { 00:20:53.482 "params": { 00:20:53.482 "block_size": 512, 00:20:53.482 "name": "malloc1", 00:20:53.482 "num_blocks": 1048576 00:20:53.482 }, 00:20:53.482 "method": "bdev_malloc_create" 00:20:53.482 }, 00:20:53.482 { 00:20:53.482 "method": "bdev_wait_for_examine" 00:20:53.482 } 00:20:53.482 ] 00:20:53.482 } 00:20:53.482 ] 00:20:53.482 } 00:20:53.482 [2024-05-15 04:58:07.605178] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:53.482 [2024-05-15 04:58:07.605336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60105 ] 00:20:53.741 [2024-05-15 04:58:07.758569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.999 [2024-05-15 04:58:07.996255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.749  Copying: 512/512 [MB] (average 613 MBps) 00:21:00.749 00:21:00.749 04:58:14 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:21:00.749 04:58:14 -- dd/malloc.sh@33 -- # gen_conf 00:21:00.749 04:58:14 -- dd/common.sh@31 -- # xtrace_disable 00:21:00.749 04:58:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.749 { 00:21:00.749 "subsystems": [ 00:21:00.749 { 00:21:00.749 "subsystem": "bdev", 00:21:00.749 "config": [ 00:21:00.749 { 00:21:00.749 "params": { 00:21:00.749 "block_size": 512, 00:21:00.749 "name": "malloc0", 00:21:00.749 "num_blocks": 1048576 00:21:00.749 }, 00:21:00.749 "method": "bdev_malloc_create" 00:21:00.749 }, 00:21:00.749 { 00:21:00.749 "params": { 00:21:00.749 "block_size": 512, 00:21:00.749 "name": "malloc1", 00:21:00.749 "num_blocks": 1048576 00:21:00.749 }, 00:21:00.749 "method": "bdev_malloc_create" 00:21:00.749 }, 00:21:00.749 { 00:21:00.749 "method": "bdev_wait_for_examine" 00:21:00.749 } 00:21:00.749 ] 00:21:00.749 } 00:21:00.749 ] 00:21:00.749 } 00:21:00.749 [2024-05-15 04:58:14.729899] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
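The JSON blob printed above is the whole storage stack for dd_malloc_copy: two RAM-backed bdevs of 1048576 blocks x 512 bytes (512 MiB each), copied malloc0 -> malloc1 and then back. A self-contained sketch of the same invocation; the test feeds the config over process substitution as /dev/fd/62, while a here-doc on stdin is used here for readability:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --ib=malloc0 --ob=malloc1 --json /dev/stdin <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "name": "malloc0", "num_blocks": 1048576},
   "method": "bdev_malloc_create"},
  {"params": {"block_size": 512, "name": "malloc1", "num_blocks": 1048576},
   "method": "bdev_malloc_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
# 1048576 blocks * 512 B = 512 MiB per bdev, matching "Copying: 512/512 [MB]"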
00:21:00.749 [2024-05-15 04:58:14.730047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60202 ] 00:21:00.749 [2024-05-15 04:58:14.897361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.007 [2024-05-15 04:58:15.139575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.724  Copying: 512/512 [MB] (average 627 MBps) 00:21:07.724 00:21:07.724 ************************************ 00:21:07.724 END TEST dd_malloc_copy 00:21:07.724 ************************************ 00:21:07.724 00:21:07.724 real 0m14.292s 00:21:07.724 user 0m12.270s 00:21:07.724 sys 0m1.728s 00:21:07.724 04:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.724 04:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.724 ************************************ 00:21:07.724 END TEST spdk_dd_malloc 00:21:07.724 ************************************ 00:21:07.724 00:21:07.724 real 0m14.435s 00:21:07.724 user 0m12.335s 00:21:07.724 sys 0m1.814s 00:21:07.724 04:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.724 04:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.724 04:58:21 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:21:07.724 04:58:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:07.724 04:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:07.724 04:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.724 ************************************ 00:21:07.724 START TEST spdk_dd_bdev_to_bdev 00:21:07.724 ************************************ 00:21:07.724 04:58:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:21:07.724 * Looking for test storage... 
00:21:07.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:21:07.724 04:58:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:07.724 04:58:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.724 04:58:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.724 04:58:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.724 04:58:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:07.724 04:58:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:07.724 04:58:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:07.724 04:58:21 -- paths/export.sh@5 -- # export PATH 00:21:07.724 04:58:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:07.724 04:58:21 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:21:07.724 04:58:21 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:21:07.724 04:58:21 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:21:07.724 04:58:21 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 
["filename"]=$aio1 ["block_size"]=4096) 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:21:07.725 04:58:21 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:21:07.983 [2024-05-15 04:58:22.074134] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:07.983 [2024-05-15 04:58:22.074283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:21:08.241 [2024-05-15 04:58:22.234530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.241 [2024-05-15 04:58:22.463578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.552  Copying: 256/256 [MB] (average 1551 MBps) 00:21:10.552 00:21:10.552 04:58:24 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:10.552 04:58:24 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:21:10.552 04:58:24 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:21:10.552 04:58:24 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:21:10.552 04:58:24 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:21:10.552 04:58:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:10.552 04:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.552 04:58:24 -- common/autotest_common.sh@10 -- # set +x 00:21:10.552 ************************************ 00:21:10.552 START TEST dd_inflate_file 00:21:10.552 ************************************ 00:21:10.552 04:58:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:21:10.552 [2024-05-15 04:58:24.551625] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:10.552 [2024-05-15 04:58:24.551923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60399 ] 00:21:10.552 [2024-05-15 04:58:24.704642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.810 [2024-05-15 04:58:24.938848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.753  Copying: 64/64 [MB] (average 1560 MBps) 00:21:12.753 00:21:12.753 00:21:12.753 real 0m2.329s 00:21:12.753 user 0m1.773s 00:21:12.753 sys 0m0.353s 00:21:12.753 04:58:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:12.753 04:58:26 -- common/autotest_common.sh@10 -- # set +x 00:21:12.753 ************************************ 00:21:12.753 END TEST dd_inflate_file 00:21:12.753 ************************************ 00:21:12.753 04:58:26 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:21:12.753 04:58:26 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:21:12.753 04:58:26 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:21:12.753 04:58:26 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:21:12.753 04:58:26 -- dd/common.sh@31 -- # xtrace_disable 00:21:12.753 04:58:26 -- common/autotest_common.sh@10 -- # set +x 00:21:12.753 04:58:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:12.753 04:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:12.753 04:58:26 -- common/autotest_common.sh@10 -- # set +x 00:21:12.753 ************************************ 00:21:12.753 START TEST dd_copy_to_out_bdev 00:21:12.753 ************************************ 00:21:12.753 04:58:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:21:12.753 { 00:21:12.753 "subsystems": [ 00:21:12.753 { 00:21:12.753 "subsystem": "bdev", 00:21:12.753 "config": [ 00:21:12.753 { 00:21:12.753 "params": { 00:21:12.753 "block_size": 4096, 00:21:12.753 "name": "aio1", 00:21:12.753 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:12.753 }, 00:21:12.753 "method": "bdev_aio_create" 00:21:12.753 }, 00:21:12.753 { 00:21:12.753 "params": { 00:21:12.753 "trtype": "pcie", 00:21:12.753 "name": "Nvme0", 00:21:12.753 "traddr": "0000:00:06.0" 00:21:12.753 }, 00:21:12.753 "method": "bdev_nvme_attach_controller" 00:21:12.753 }, 00:21:12.753 { 00:21:12.753 "method": "bdev_wait_for_examine" 00:21:12.753 } 00:21:12.753 ] 00:21:12.753 } 00:21:12.753 ] 00:21:12.753 } 00:21:12.753 [2024-05-15 04:58:26.932328] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
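dd_inflate_file grows the scratch file in place: --oflag=append writes 64 one-MiB blocks of zeros after the 27-byte magic line already echoed into dd.dump0, giving exactly the 67108891 bytes (27 + 64 * 1048576) that the wc -c check above records as test_file0_size. As a sketch (printf stands in for the test's echo; paths shortened):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf 'This Is Our Magic, find it\n' > dd.dump0    # 26 chars + newline = 27 B
"$DD" --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
wc -c < dd.dump0    # 67108891 = 27 + 64 * 1048576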
00:21:12.753 [2024-05-15 04:58:26.932474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:21:13.011 [2024-05-15 04:58:27.102773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.271 [2024-05-15 04:58:27.329731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.023  Copying: 64/64 [MB] (average 90 MBps) 00:21:16.023 00:21:16.023 ************************************ 00:21:16.023 END TEST dd_copy_to_out_bdev 00:21:16.023 ************************************ 00:21:16.023 00:21:16.023 real 0m3.127s 00:21:16.023 user 0m2.602s 00:21:16.023 sys 0m0.383s 00:21:16.023 04:58:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.023 04:58:29 -- common/autotest_common.sh@10 -- # set +x 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:21:16.023 04:58:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:16.023 04:58:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:16.023 04:58:29 -- common/autotest_common.sh@10 -- # set +x 00:21:16.023 ************************************ 00:21:16.023 START TEST dd_offset_magic 00:21:16.023 ************************************ 00:21:16.023 04:58:29 -- common/autotest_common.sh@1104 -- # offset_magic 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:21:16.023 04:58:29 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:21:16.023 04:58:29 -- dd/common.sh@31 -- # xtrace_disable 00:21:16.023 04:58:29 -- common/autotest_common.sh@10 -- # set +x 00:21:16.023 { 00:21:16.023 "subsystems": [ 00:21:16.023 { 00:21:16.023 "subsystem": "bdev", 00:21:16.023 "config": [ 00:21:16.023 { 00:21:16.023 "params": { 00:21:16.023 "block_size": 4096, 00:21:16.023 "name": "aio1", 00:21:16.023 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:16.023 }, 00:21:16.023 "method": "bdev_aio_create" 00:21:16.024 }, 00:21:16.024 { 00:21:16.024 "params": { 00:21:16.024 "trtype": "pcie", 00:21:16.024 "name": "Nvme0", 00:21:16.024 "traddr": "0000:00:06.0" 00:21:16.024 }, 00:21:16.024 "method": "bdev_nvme_attach_controller" 00:21:16.024 }, 00:21:16.024 { 00:21:16.024 "method": "bdev_wait_for_examine" 00:21:16.024 } 00:21:16.024 ] 00:21:16.024 } 00:21:16.024 ] 00:21:16.024 } 00:21:16.024 [2024-05-15 04:58:30.111240] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
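dd_copy_to_out_bdev exercises the file-to-bdev path: the gen_conf JSON above attaches both an aio bdev over the scratch file and the NVMe controller at PCI 0000:00:06.0, then dd.dump0 (the magic line plus 64 MiB of zeros) is written onto Nvme0n1. A condensed sketch, with a stdin here-doc standing in for the test's /dev/fd/62:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
      --json /dev/stdin <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 4096, "name": "aio1",
              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1"},
   "method": "bdev_aio_create"},
  {"params": {"trtype": "pcie", "name": "Nvme0", "traddr": "0000:00:06.0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF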
00:21:16.024 [2024-05-15 04:58:30.111392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60526 ] 00:21:16.283 [2024-05-15 04:58:30.266674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.283 [2024-05-15 04:58:30.486868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.594  Copying: 65/65 [MB] (average 176 MBps) 00:21:18.594 00:21:18.594 04:58:32 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:21:18.594 04:58:32 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:21:18.594 04:58:32 -- dd/common.sh@31 -- # xtrace_disable 00:21:18.594 04:58:32 -- common/autotest_common.sh@10 -- # set +x 00:21:18.594 { 00:21:18.594 "subsystems": [ 00:21:18.594 { 00:21:18.594 "subsystem": "bdev", 00:21:18.594 "config": [ 00:21:18.594 { 00:21:18.594 "params": { 00:21:18.594 "block_size": 4096, 00:21:18.594 "name": "aio1", 00:21:18.594 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:18.594 }, 00:21:18.594 "method": "bdev_aio_create" 00:21:18.594 }, 00:21:18.594 { 00:21:18.594 "params": { 00:21:18.594 "trtype": "pcie", 00:21:18.595 "name": "Nvme0", 00:21:18.595 "traddr": "0000:00:06.0" 00:21:18.595 }, 00:21:18.595 "method": "bdev_nvme_attach_controller" 00:21:18.595 }, 00:21:18.595 { 00:21:18.595 "method": "bdev_wait_for_examine" 00:21:18.595 } 00:21:18.595 ] 00:21:18.595 } 00:21:18.595 ] 00:21:18.595 } 00:21:18.852 [2024-05-15 04:58:32.847610] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:18.852 [2024-05-15 04:58:32.847914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:21:18.852 [2024-05-15 04:58:33.010534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.109 [2024-05-15 04:58:33.237420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.047  Copying: 1024/1024 [kB] (average 1000 MBps) 00:21:21.047 00:21:21.047 04:58:35 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:21:21.047 04:58:35 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:21:21.047 04:58:35 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:21:21.047 04:58:35 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:21:21.047 04:58:35 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:21:21.047 04:58:35 -- dd/common.sh@31 -- # xtrace_disable 00:21:21.047 04:58:35 -- common/autotest_common.sh@10 -- # set +x 00:21:21.047 { 00:21:21.047 "subsystems": [ 00:21:21.047 { 00:21:21.047 "subsystem": "bdev", 00:21:21.047 "config": [ 00:21:21.047 { 00:21:21.047 "params": { 00:21:21.047 "block_size": 4096, 00:21:21.047 "name": "aio1", 00:21:21.047 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:21.047 }, 00:21:21.047 "method": "bdev_aio_create" 00:21:21.047 }, 00:21:21.047 { 00:21:21.047 "params": { 00:21:21.047 "trtype": "pcie", 00:21:21.047 "name": "Nvme0", 00:21:21.047 "traddr": "0000:00:06.0" 00:21:21.047 }, 00:21:21.047 "method": "bdev_nvme_attach_controller" 00:21:21.047 }, 00:21:21.047 { 00:21:21.047 "method": "bdev_wait_for_examine" 00:21:21.047 } 00:21:21.047 ] 00:21:21.047 } 00:21:21.047 ] 00:21:21.047 } 00:21:21.047 [2024-05-15 04:58:35.225809] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:21.047 [2024-05-15 04:58:35.225957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:21:21.305 [2024-05-15 04:58:35.392899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.563 [2024-05-15 04:58:35.608644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.876  Copying: 65/65 [MB] (average 196 MBps) 00:21:23.876 00:21:23.876 04:58:37 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:21:23.876 04:58:37 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:21:23.876 04:58:37 -- dd/common.sh@31 -- # xtrace_disable 00:21:23.876 04:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.876 { 00:21:23.876 "subsystems": [ 00:21:23.876 { 00:21:23.876 "subsystem": "bdev", 00:21:23.876 "config": [ 00:21:23.876 { 00:21:23.876 "params": { 00:21:23.876 "block_size": 4096, 00:21:23.876 "name": "aio1", 00:21:23.876 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:23.876 }, 00:21:23.876 "method": "bdev_aio_create" 00:21:23.876 }, 00:21:23.876 { 00:21:23.876 "params": { 00:21:23.876 "trtype": "pcie", 00:21:23.876 "name": "Nvme0", 00:21:23.876 "traddr": "0000:00:06.0" 00:21:23.876 }, 00:21:23.876 "method": "bdev_nvme_attach_controller" 00:21:23.876 }, 00:21:23.876 { 00:21:23.876 "method": "bdev_wait_for_examine" 00:21:23.876 } 00:21:23.876 ] 00:21:23.876 } 00:21:23.876 ] 00:21:23.876 } 00:21:23.876 [2024-05-15 04:58:37.942846] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:23.876 [2024-05-15 04:58:37.943010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60648 ] 00:21:23.876 [2024-05-15 04:58:38.101587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.155 [2024-05-15 04:58:38.328938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.106  Copying: 1024/1024 [kB] (average 500 MBps) 00:21:26.106 00:21:26.106 ************************************ 00:21:26.106 END TEST dd_offset_magic 00:21:26.106 ************************************ 00:21:26.106 04:58:40 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:21:26.106 04:58:40 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:21:26.106 00:21:26.106 real 0m10.201s 00:21:26.106 user 0m7.752s 00:21:26.106 sys 0m1.414s 00:21:26.106 04:58:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.106 04:58:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.106 04:58:40 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:21:26.106 04:58:40 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:21:26.106 04:58:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:21:26.106 04:58:40 -- dd/common.sh@11 -- # local nvme_ref= 00:21:26.106 04:58:40 -- dd/common.sh@12 -- # local size=4194330 00:21:26.106 04:58:40 -- dd/common.sh@14 -- # local bs=1048576 00:21:26.106 04:58:40 -- dd/common.sh@15 -- # local count=5 00:21:26.106 04:58:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:21:26.106 04:58:40 -- dd/common.sh@18 -- # gen_conf 00:21:26.106 04:58:40 -- dd/common.sh@31 -- # xtrace_disable 00:21:26.106 04:58:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.106 { 00:21:26.106 "subsystems": [ 00:21:26.106 { 00:21:26.106 "subsystem": "bdev", 00:21:26.106 "config": [ 00:21:26.106 { 00:21:26.106 "params": { 00:21:26.106 "block_size": 4096, 00:21:26.106 "name": "aio1", 00:21:26.106 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:26.106 }, 00:21:26.106 "method": "bdev_aio_create" 00:21:26.106 }, 00:21:26.106 { 00:21:26.106 "params": { 00:21:26.106 "trtype": "pcie", 00:21:26.106 "name": "Nvme0", 00:21:26.106 "traddr": "0000:00:06.0" 00:21:26.106 }, 00:21:26.106 "method": "bdev_nvme_attach_controller" 00:21:26.106 }, 00:21:26.106 { 00:21:26.106 "method": "bdev_wait_for_examine" 00:21:26.106 } 00:21:26.106 ] 00:21:26.106 } 00:21:26.106 ] 00:21:26.106 } 00:21:26.365 [2024-05-15 04:58:40.353353] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
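dd_offset_magic, which finishes above, checks bdev addressing: for each offset in (16 64), 65 MiB starting with the magic string are copied from Nvme0n1 into aio1 at that 1 MiB-block offset (--seek), one block is read back from the same offset (--skip), and its first 26 bytes must still be the magic. A sketch of the loop; CONF is a placeholder for the gen_conf JSON printed in the log, not a file the test writes, and the redirect into read is an assumption about where the test takes the bytes from:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=conf.json                                   # placeholder, see note above
magic='This Is Our Magic, find it'
for offset in 16 64; do
    "$DD" --ib=Nvme0n1 --ob=aio1 --count=65 --seek="$offset" --bs=1048576 --json "$CONF"
    "$DD" --ib=aio1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json "$CONF"
    read -rn26 magic_check < dd.dump1            # first 26 bytes of that block
    [[ $magic_check == "$magic" ]]
done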
00:21:26.365 [2024-05-15 04:58:40.353524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60697 ] 00:21:26.365 [2024-05-15 04:58:40.529378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.623 [2024-05-15 04:58:40.750594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.564  Copying: 5120/5120 [kB] (average 1000 MBps) 00:21:28.564 00:21:28.564 04:58:42 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:21:28.564 04:58:42 -- dd/common.sh@10 -- # local bdev=aio1 00:21:28.564 04:58:42 -- dd/common.sh@11 -- # local nvme_ref= 00:21:28.564 04:58:42 -- dd/common.sh@12 -- # local size=4194330 00:21:28.564 04:58:42 -- dd/common.sh@14 -- # local bs=1048576 00:21:28.564 04:58:42 -- dd/common.sh@15 -- # local count=5 00:21:28.564 04:58:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:21:28.564 04:58:42 -- dd/common.sh@18 -- # gen_conf 00:21:28.564 04:58:42 -- dd/common.sh@31 -- # xtrace_disable 00:21:28.564 04:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 { 00:21:28.564 "subsystems": [ 00:21:28.564 { 00:21:28.564 "subsystem": "bdev", 00:21:28.564 "config": [ 00:21:28.564 { 00:21:28.564 "params": { 00:21:28.564 "block_size": 4096, 00:21:28.564 "name": "aio1", 00:21:28.564 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:21:28.564 }, 00:21:28.564 "method": "bdev_aio_create" 00:21:28.564 }, 00:21:28.564 { 00:21:28.564 "params": { 00:21:28.564 "trtype": "pcie", 00:21:28.564 "name": "Nvme0", 00:21:28.564 "traddr": "0000:00:06.0" 00:21:28.564 }, 00:21:28.564 "method": "bdev_nvme_attach_controller" 00:21:28.564 }, 00:21:28.564 { 00:21:28.564 "method": "bdev_wait_for_examine" 00:21:28.564 } 00:21:28.564 ] 00:21:28.564 } 00:21:28.564 ] 00:21:28.564 } 00:21:28.564 [2024-05-15 04:58:42.757672] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
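The cleanup around this point scrubs both bdevs the same way: clear_nvme rounds the 4194330-byte test region up to five 1 MiB blocks and overwrites them from /dev/zero. Condensed here into one loop (CONF again stands in for the gen_conf JSON shown in the log):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=conf.json    # placeholder for the gen_conf JSON in the log
for bdev in Nvme0n1 aio1; do
    "$DD" --if=/dev/zero --bs=1048576 --ob="$bdev" --count=5 --json "$CONF"
done
# ceil(4194330 / 1048576) = 5 blocks, hence "Copying: 5120/5120 [kB]"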
00:21:28.564 [2024-05-15 04:58:42.757865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60738 ] 00:21:28.823 [2024-05-15 04:58:42.911428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.089 [2024-05-15 04:58:43.133095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.042  Copying: 5120/5120 [kB] (average 142 MBps) 00:21:31.042 00:21:31.042 04:58:45 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:21:31.042 00:21:31.042 real 0m23.239s 00:21:31.042 user 0m17.816s 00:21:31.042 sys 0m3.515s 00:21:31.042 04:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.042 ************************************ 00:21:31.042 END TEST spdk_dd_bdev_to_bdev 00:21:31.042 ************************************ 00:21:31.042 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.042 04:58:45 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:21:31.042 04:58:45 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:21:31.042 04:58:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:31.042 04:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:31.042 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.042 ************************************ 00:21:31.042 START TEST spdk_dd_sparse 00:21:31.042 ************************************ 00:21:31.042 04:58:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:21:31.042 * Looking for test storage... 
00:21:31.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:21:31.042 04:58:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.042 04:58:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.043 04:58:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.043 04:58:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.043 04:58:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:31.043 04:58:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:31.043 04:58:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:31.043 04:58:45 -- paths/export.sh@5 -- # export PATH 00:21:31.043 04:58:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:31.043 04:58:45 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:21:31.043 04:58:45 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:21:31.043 04:58:45 -- dd/sparse.sh@110 -- # file1=file_zero1 00:21:31.043 04:58:45 -- dd/sparse.sh@111 -- # file2=file_zero2 00:21:31.043 04:58:45 -- dd/sparse.sh@112 -- # file3=file_zero3 00:21:31.043 04:58:45 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:21:31.043 04:58:45 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:21:31.043 04:58:45 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:21:31.043 04:58:45 -- dd/sparse.sh@118 -- # prepare 00:21:31.043 04:58:45 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:21:31.043 04:58:45 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:21:31.043 1+0 records in 00:21:31.043 1+0 records out 00:21:31.043 4194304 bytes (4.2 MB) copied, 0.00726085 s, 578 MB/s 00:21:31.043 04:58:45 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M 
count=1 seek=4 00:21:31.043 1+0 records in 00:21:31.043 1+0 records out 00:21:31.043 4194304 bytes (4.2 MB) copied, 0.00744105 s, 564 MB/s 00:21:31.043 04:58:45 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:21:31.043 1+0 records in 00:21:31.043 1+0 records out 00:21:31.043 4194304 bytes (4.2 MB) copied, 0.00723381 s, 580 MB/s 00:21:31.043 04:58:45 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:21:31.043 04:58:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:31.043 04:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:31.043 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.301 ************************************ 00:21:31.301 START TEST dd_sparse_file_to_file 00:21:31.301 ************************************ 00:21:31.301 04:58:45 -- common/autotest_common.sh@1104 -- # file_to_file 00:21:31.301 04:58:45 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:21:31.301 04:58:45 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:21:31.301 04:58:45 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:21:31.301 04:58:45 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:21:31.301 04:58:45 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:21:31.301 04:58:45 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:21:31.301 04:58:45 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:21:31.301 04:58:45 -- dd/sparse.sh@41 -- # gen_conf 00:21:31.301 04:58:45 -- dd/common.sh@31 -- # xtrace_disable 00:21:31.301 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.301 { 00:21:31.301 "subsystems": [ 00:21:31.301 { 00:21:31.301 "subsystem": "bdev", 00:21:31.301 "config": [ 00:21:31.301 { 00:21:31.301 "params": { 00:21:31.301 "block_size": 4096, 00:21:31.301 "name": "dd_aio", 00:21:31.301 "filename": "dd_sparse_aio_disk" 00:21:31.301 }, 00:21:31.301 "method": "bdev_aio_create" 00:21:31.301 }, 00:21:31.301 { 00:21:31.301 "params": { 00:21:31.301 "bdev_name": "dd_aio", 00:21:31.301 "lvs_name": "dd_lvstore" 00:21:31.301 }, 00:21:31.301 "method": "bdev_lvol_create_lvstore" 00:21:31.301 }, 00:21:31.301 { 00:21:31.301 "method": "bdev_wait_for_examine" 00:21:31.301 } 00:21:31.301 ] 00:21:31.301 } 00:21:31.301 ] 00:21:31.301 } 00:21:31.301 [2024-05-15 04:58:45.429983] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
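prepare() above lays out the sparse source file as three 4 MiB data extents at 0, 16 and 32 MiB (dd's seek counts in bs-sized blocks), plus a 100 MiB backing file for the aio bdev. The exact commands from the log:

truncate dd_sparse_aio_disk --size 104857600         # 100 MiB aio backing file
dd if=/dev/zero of=file_zero1 bs=4M count=1          # extent at  0 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # extent at 32 MiB
# apparent size: 9 * 4 MiB = 36 MiB; actually allocated: 3 * 4 MiB = 12 MiB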
00:21:31.301 [2024-05-15 04:58:45.430148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60836 ] 00:21:31.559 [2024-05-15 04:58:45.581574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.817 [2024-05-15 04:58:45.807464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.762  Copying: 12/36 [MB] (average 1000 MBps) 00:21:33.762 00:21:33.762 04:58:47 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:21:33.762 04:58:47 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:21:33.762 04:58:47 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:21:33.762 04:58:47 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:21:33.762 04:58:47 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:21:33.762 04:58:47 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:21:33.762 04:58:47 -- dd/sparse.sh@52 -- # stat1_b=24576 00:21:33.762 04:58:47 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:21:33.762 04:58:47 -- dd/sparse.sh@53 -- # stat2_b=24576 00:21:33.762 04:58:47 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:21:33.762 ************************************ 00:21:33.762 END TEST dd_sparse_file_to_file 00:21:33.762 ************************************ 00:21:33.762 00:21:33.762 real 0m2.480s 00:21:33.762 user 0m1.941s 00:21:33.762 sys 0m0.391s 00:21:33.762 04:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.762 04:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.762 04:58:47 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:21:33.762 04:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:33.762 04:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:33.762 04:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.762 ************************************ 00:21:33.762 START TEST dd_sparse_file_to_bdev 00:21:33.762 ************************************ 00:21:33.762 04:58:47 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:21:33.762 04:58:47 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:21:33.762 04:58:47 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:21:33.762 04:58:47 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:21:33.762 04:58:47 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:21:33.762 04:58:47 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:21:33.762 04:58:47 -- dd/sparse.sh@73 -- # gen_conf 00:21:33.763 04:58:47 -- dd/common.sh@31 -- # xtrace_disable 00:21:33.763 04:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.763 { 00:21:33.763 "subsystems": [ 00:21:33.763 { 00:21:33.763 "subsystem": "bdev", 00:21:33.763 "config": [ 00:21:33.763 { 00:21:33.763 "params": { 00:21:33.763 "block_size": 4096, 00:21:33.763 "name": "dd_aio", 00:21:33.763 "filename": "dd_sparse_aio_disk" 00:21:33.763 }, 00:21:33.763 "method": "bdev_aio_create" 00:21:33.763 }, 00:21:33.763 { 00:21:33.763 "params": { 00:21:33.763 "thin_provision": true, 00:21:33.763 "size": 37748736, 00:21:33.763 "lvol_name": "dd_lvol", 00:21:33.763 "lvs_name": "dd_lvstore" 00:21:33.763 }, 00:21:33.763 "method": 
"bdev_lvol_create" 00:21:33.763 }, 00:21:33.763 { 00:21:33.763 "method": "bdev_wait_for_examine" 00:21:33.763 } 00:21:33.763 ] 00:21:33.763 } 00:21:33.763 ] 00:21:33.763 } 00:21:33.763 [2024-05-15 04:58:47.956574] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:33.763 [2024-05-15 04:58:47.956842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60908 ] 00:21:34.021 [2024-05-15 04:58:48.105371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.281 [2024-05-15 04:58:48.329930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.848 [2024-05-15 04:58:48.784431] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:21:34.848  Copying: 12/36 [MB] (average 111 MBps)[2024-05-15 04:58:48.942566] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:21:36.224 00:21:36.224 00:21:36.224 ************************************ 00:21:36.224 END TEST dd_sparse_file_to_bdev 00:21:36.224 ************************************ 00:21:36.224 00:21:36.225 real 0m2.546s 00:21:36.225 user 0m2.046s 00:21:36.225 sys 0m0.353s 00:21:36.225 04:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.225 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.225 04:58:50 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:21:36.225 04:58:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:36.225 04:58:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.225 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.225 ************************************ 00:21:36.225 START TEST dd_sparse_bdev_to_file 00:21:36.225 ************************************ 00:21:36.225 04:58:50 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:21:36.225 04:58:50 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:21:36.225 04:58:50 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:21:36.225 04:58:50 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:21:36.225 04:58:50 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:21:36.225 04:58:50 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:21:36.225 04:58:50 -- dd/sparse.sh@91 -- # gen_conf 00:21:36.225 04:58:50 -- dd/common.sh@31 -- # xtrace_disable 00:21:36.225 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.483 { 00:21:36.483 "subsystems": [ 00:21:36.483 { 00:21:36.483 "subsystem": "bdev", 00:21:36.483 "config": [ 00:21:36.483 { 00:21:36.483 "params": { 00:21:36.483 "block_size": 4096, 00:21:36.484 "name": "dd_aio", 00:21:36.484 "filename": "dd_sparse_aio_disk" 00:21:36.484 }, 00:21:36.484 "method": "bdev_aio_create" 00:21:36.484 }, 00:21:36.484 { 00:21:36.484 "method": "bdev_wait_for_examine" 00:21:36.484 } 00:21:36.484 ] 00:21:36.484 } 00:21:36.484 ] 00:21:36.484 } 00:21:36.484 [2024-05-15 04:58:50.568779] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:36.484 [2024-05-15 04:58:50.568937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60966 ] 00:21:36.743 [2024-05-15 04:58:50.720692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.743 [2024-05-15 04:58:50.971599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.688  Copying: 12/36 [MB] (average 1200 MBps) 00:21:38.688 00:21:38.947 04:58:52 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:21:38.947 04:58:52 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:21:38.947 04:58:52 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:21:38.947 04:58:52 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:21:38.947 04:58:52 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:21:38.947 04:58:52 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:21:38.947 04:58:52 -- dd/sparse.sh@102 -- # stat2_b=24576 00:21:38.947 04:58:52 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:21:38.947 ************************************ 00:21:38.947 END TEST dd_sparse_bdev_to_file 00:21:38.947 ************************************ 00:21:38.947 04:58:52 -- dd/sparse.sh@103 -- # stat3_b=24576 00:21:38.947 04:58:52 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:21:38.947 00:21:38.947 real 0m2.521s 00:21:38.947 user 0m2.005s 00:21:38.947 sys 0m0.371s 00:21:38.947 04:58:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.947 04:58:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.947 04:58:52 -- dd/sparse.sh@1 -- # cleanup 00:21:38.947 04:58:52 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:21:38.948 04:58:52 -- dd/sparse.sh@12 -- # rm file_zero1 00:21:38.948 04:58:52 -- dd/sparse.sh@13 -- # rm file_zero2 00:21:38.948 04:58:52 -- dd/sparse.sh@14 -- # rm file_zero3 00:21:38.948 ************************************ 00:21:38.948 END TEST spdk_dd_sparse 00:21:38.948 ************************************ 00:21:38.948 00:21:38.948 real 0m7.875s 00:21:38.948 user 0m6.102s 00:21:38.948 sys 0m1.315s 00:21:38.948 04:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.948 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.948 04:58:53 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:21:38.948 04:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:38.948 04:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:38.948 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.948 ************************************ 00:21:38.948 START TEST spdk_dd_negative 00:21:38.948 ************************************ 00:21:38.948 04:58:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:21:38.948 * Looking for test storage... 
00:21:38.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:21:38.948 04:58:53 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:38.948 04:58:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.948 04:58:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.948 04:58:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.948 04:58:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:38.948 04:58:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:38.948 04:58:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:38.948 04:58:53 -- paths/export.sh@5 -- # export PATH 00:21:38.948 04:58:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:38.948 04:58:53 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:38.948 04:58:53 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:21:38.948 04:58:53 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:38.948 04:58:53 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:21:38.948 04:58:53 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:21:38.948 04:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:38.948 04:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:38.948 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.948 ************************************ 00:21:38.948 START TEST dd_invalid_arguments 00:21:38.948 ************************************ 00:21:38.948 04:58:53 -- common/autotest_common.sh@1104 -- # 
invalid_arguments 00:21:38.948 04:58:53 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:21:38.948 04:58:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:38.948 04:58:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:21:38.948 04:58:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.948 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:38.948 04:58:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.948 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:38.948 04:58:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.948 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:38.948 04:58:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:38.948 04:58:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:38.948 04:58:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:21:39.207 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:21:39.207 options: 00:21:39.207 -c, --config JSON config file (default none) 00:21:39.207 --json JSON config file (default none) 00:21:39.207 --json-ignore-init-errors 00:21:39.207 don't exit on invalid config entry 00:21:39.207 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:21:39.207 -g, --single-file-segments 00:21:39.207 force creating just one hugetlbfs file 00:21:39.207 -h, --help show this usage 00:21:39.207 -i, --shm-id shared memory ID (optional) 00:21:39.207 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:21:39.207 --lcores lcore to CPU mapping list. The list is in the format: 00:21:39.207 [<,lcores[@CPUs]>...] 00:21:39.207 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:21:39.207 Within the group, '-' is used for range separator, 00:21:39.207 ',' is used for single number separator. 00:21:39.207 '( )' can be omitted for single element group, 00:21:39.207 '@' can be omitted if cpus and lcores have the same value 00:21:39.207 -n, --mem-channels channel number of memory channels used for DPDK 00:21:39.207 -p, --main-core main (primary) core for DPDK 00:21:39.207 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:21:39.207 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:21:39.207 --disable-cpumask-locks Disable CPU core lock files. 
00:21:39.207 --silence-noticelog disable notice level logging to stderr 00:21:39.207 --msg-mempool-size global message memory pool size in count (default: 262143) 00:21:39.207 -u, --no-pci disable PCI access 00:21:39.207 --wait-for-rpc wait for RPCs to initialize subsystems 00:21:39.207 --max-delay maximum reactor delay (in microseconds) 00:21:39.207 -B, --pci-blocked pci addr to block (can be used more than once) 00:21:39.207 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:21:39.207 -R, --huge-unlink unlink huge files after initialization 00:21:39.207 -v, --version print SPDK version 00:21:39.207 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:21:39.207 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:21:39.207 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:21:39.207 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:21:39.207 Tracepoints vary in size and can use more than one trace entry. 00:21:39.207 --rpcs-allowed comma-separated list of permitted RPCS 00:21:39.207 --env-context Opaque context for use of the env implementation 00:21:39.207 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:21:39.207 --no-huge run without using hugepages 00:21:39.207 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_daos, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:21:39.207 -e, --tpoint-group [:] 00:21:39.207 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:21:39.207 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:21:39.207 Groups and masks can be c/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:21:39.207 [2024-05-15 04:58:53.314245] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:21:39.207 ombined (e.g. thread,bdev:0x1). 00:21:39.207 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:21:39.207 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:21:39.207 [--------- DD Options ---------] 00:21:39.207 --if Input file. Must specify either --if or --ib. 00:21:39.207 --ib Input bdev. Must specifier either --if or --ib 00:21:39.207 --of Output file. Must specify either --of or --ob. 00:21:39.207 --ob Output bdev. Must specify either --of or --ob. 00:21:39.207 --iflag Input file flags. 00:21:39.207 --oflag Output file flags. 00:21:39.207 --bs I/O unit size (default: 4096) 00:21:39.207 --qd Queue depth (default: 2) 00:21:39.207 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:21:39.208 --skip Skip this many I/O units at start of input. (default: 0) 00:21:39.208 --seek Skip this many I/O units at start of output. (default: 0) 00:21:39.208 --aio Force usage of AIO. (by default io_uring is used if available) 00:21:39.208 --sparse Enable hole skipping in input target 00:21:39.208 Available iflag and oflag values: 00:21:39.208 append - append mode 00:21:39.208 direct - use direct I/O for data 00:21:39.208 directory - fail unless a directory 00:21:39.208 dsync - use synchronized I/O for data 00:21:39.208 noatime - do not update access time 00:21:39.208 noctty - do not assign controlling terminal from file 00:21:39.208 nofollow - do not follow symlinks 00:21:39.208 nonblock - use non-blocking I/O 00:21:39.208 sync - use synchronized I/O for data and metadata 00:21:39.208 04:58:53 -- common/autotest_common.sh@643 -- # es=2 00:21:39.208 04:58:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:39.208 04:58:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:39.208 04:58:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:39.208 ************************************ 00:21:39.208 END TEST dd_invalid_arguments 00:21:39.208 ************************************ 00:21:39.208 00:21:39.208 real 0m0.171s 00:21:39.208 user 0m0.031s 00:21:39.208 sys 0m0.044s 00:21:39.208 04:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.208 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.208 04:58:53 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:21:39.208 04:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:39.208 04:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.208 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.208 ************************************ 00:21:39.208 START TEST dd_double_input 00:21:39.208 ************************************ 00:21:39.208 04:58:53 -- common/autotest_common.sh@1104 -- # double_input 00:21:39.208 04:58:53 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:21:39.208 04:58:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:39.208 04:58:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:21:39.208 04:58:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.208 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.208 04:58:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.208 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.208 04:58:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.208 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.208 04:58:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.208 04:58:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:39.208 04:58:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:21:39.466 [2024-05-15 04:58:53.533135] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:21:39.466 ************************************ 00:21:39.466 END TEST dd_double_input 00:21:39.466 ************************************ 00:21:39.466 04:58:53 -- common/autotest_common.sh@643 -- # es=22 00:21:39.466 04:58:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:39.466 04:58:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:39.466 04:58:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:39.466 00:21:39.466 real 0m0.171s 00:21:39.466 user 0m0.035s 00:21:39.466 sys 0m0.040s 00:21:39.466 04:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.466 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.466 04:58:53 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:21:39.466 04:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:39.466 04:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.466 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.466 ************************************ 00:21:39.466 START TEST dd_double_output 00:21:39.466 ************************************ 00:21:39.466 04:58:53 -- common/autotest_common.sh@1104 -- # double_output 00:21:39.466 04:58:53 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:21:39.466 04:58:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:39.466 04:58:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:21:39.466 04:58:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.466 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.466 04:58:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.466 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.466 04:58:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.466 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.466 04:58:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.466 04:58:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:39.466 04:58:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:21:39.724 [2024-05-15 04:58:53.759658] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:21:39.724 ************************************ 00:21:39.724 END TEST dd_double_output 00:21:39.724 ************************************ 00:21:39.724 04:58:53 -- common/autotest_common.sh@643 -- # es=22 00:21:39.724 04:58:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:39.724 04:58:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:39.724 04:58:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:39.724 00:21:39.724 real 0m0.171s 00:21:39.724 user 0m0.037s 00:21:39.724 sys 0m0.039s 00:21:39.724 04:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.724 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.724 04:58:53 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:21:39.724 04:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:39.724 04:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.724 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.724 ************************************ 00:21:39.724 START TEST dd_no_input 00:21:39.724 ************************************ 00:21:39.724 04:58:53 -- common/autotest_common.sh@1104 -- # no_input 00:21:39.724 04:58:53 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:21:39.724 04:58:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:39.724 04:58:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:21:39.724 04:58:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.724 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.724 04:58:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.725 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.725 04:58:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.725 04:58:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.725 04:58:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.725 04:58:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:39.725 04:58:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:21:39.983 [2024-05-15 04:58:53.983697] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:21:39.983 ************************************ 00:21:39.983 END TEST dd_no_input 00:21:39.983 ************************************ 00:21:39.983 04:58:54 -- common/autotest_common.sh@643 -- # es=22 00:21:39.983 04:58:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:39.983 04:58:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:39.983 04:58:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:39.983 00:21:39.983 real 0m0.170s 00:21:39.983 user 0m0.033s 00:21:39.983 sys 0m0.040s 00:21:39.983 04:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.983 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:39.983 04:58:54 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:21:39.983 04:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:39.983 04:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.983 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:39.983 ************************************ 
00:21:39.983 START TEST dd_no_output 00:21:39.983 ************************************ 00:21:39.983 04:58:54 -- common/autotest_common.sh@1104 -- # no_output 00:21:39.983 04:58:54 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:39.983 04:58:54 -- common/autotest_common.sh@640 -- # local es=0 00:21:39.983 04:58:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:39.983 04:58:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.983 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.983 04:58:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.983 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.983 04:58:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.983 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:39.983 04:58:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:39.983 04:58:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:39.983 04:58:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:21:39.983 [2024-05-15 04:58:54.207688] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:21:40.241 ************************************ 00:21:40.241 END TEST dd_no_output 00:21:40.241 ************************************ 00:21:40.241 04:58:54 -- common/autotest_common.sh@643 -- # es=22 00:21:40.241 04:58:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:40.241 04:58:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:40.241 04:58:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:40.241 00:21:40.241 real 0m0.170s 00:21:40.241 user 0m0.031s 00:21:40.241 sys 0m0.044s 00:21:40.241 04:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.241 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.241 04:58:54 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:21:40.241 04:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:40.241 04:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.241 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.241 ************************************ 00:21:40.241 START TEST dd_wrong_blocksize 00:21:40.241 ************************************ 00:21:40.241 04:58:54 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:21:40.241 04:58:54 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:21:40.241 04:58:54 -- common/autotest_common.sh@640 -- # local es=0 00:21:40.241 04:58:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:21:40.241 04:58:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.241 04:58:54 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:21:40.241 04:58:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.241 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.241 04:58:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.241 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.241 04:58:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.241 04:58:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:40.241 04:58:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:21:40.241 [2024-05-15 04:58:54.427845] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:21:40.241 ************************************ 00:21:40.241 END TEST dd_wrong_blocksize 00:21:40.241 ************************************ 00:21:40.241 04:58:54 -- common/autotest_common.sh@643 -- # es=22 00:21:40.241 04:58:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:40.241 04:58:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:40.241 04:58:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:40.241 00:21:40.241 real 0m0.170s 00:21:40.241 user 0m0.037s 00:21:40.241 sys 0m0.038s 00:21:40.241 04:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.241 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.500 04:58:54 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:21:40.500 04:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:40.500 04:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.500 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.500 ************************************ 00:21:40.500 START TEST dd_smaller_blocksize 00:21:40.500 ************************************ 00:21:40.500 04:58:54 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:21:40.500 04:58:54 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:21:40.500 04:58:54 -- common/autotest_common.sh@640 -- # local es=0 00:21:40.500 04:58:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:21:40.500 04:58:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.500 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.500 04:58:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.500 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.500 04:58:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.500 04:58:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.500 04:58:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.500 04:58:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:21:40.500 04:58:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:21:40.500 [2024-05-15 04:58:54.655591] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:40.500 [2024-05-15 04:58:54.655883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61252 ] 00:21:40.758 [2024-05-15 04:58:54.810168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.017 [2024-05-15 04:58:55.053998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.584 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:21:41.584 [2024-05-15 04:58:55.788804] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:21:41.584 [2024-05-15 04:58:55.788922] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:42.520 [2024-05-15 04:58:56.714930] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:21:43.088 ************************************ 00:21:43.088 END TEST dd_smaller_blocksize 00:21:43.088 ************************************ 00:21:43.088 04:58:57 -- common/autotest_common.sh@643 -- # es=244 00:21:43.088 04:58:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:43.088 04:58:57 -- common/autotest_common.sh@652 -- # es=116 00:21:43.088 04:58:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:21:43.088 04:58:57 -- common/autotest_common.sh@660 -- # es=1 00:21:43.088 04:58:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:43.088 00:21:43.088 real 0m2.636s 00:21:43.088 user 0m1.888s 00:21:43.088 sys 0m0.549s 00:21:43.088 04:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.088 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.088 04:58:57 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:21:43.088 04:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:43.088 04:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:43.088 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.088 ************************************ 00:21:43.088 START TEST dd_invalid_count 00:21:43.088 ************************************ 00:21:43.088 04:58:57 -- common/autotest_common.sh@1104 -- # invalid_count 00:21:43.088 04:58:57 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:21:43.088 04:58:57 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.088 04:58:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:21:43.088 04:58:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.088 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.088 04:58:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.088 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.088 04:58:57 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.088 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.088 04:58:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.088 04:58:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:43.088 04:58:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:21:43.346 [2024-05-15 04:58:57.345142] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:21:43.346 04:58:57 -- common/autotest_common.sh@643 -- # es=22 00:21:43.346 04:58:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:43.346 ************************************ 00:21:43.346 END TEST dd_invalid_count 00:21:43.346 ************************************ 00:21:43.346 04:58:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:43.346 04:58:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:43.346 00:21:43.346 real 0m0.173s 00:21:43.346 user 0m0.036s 00:21:43.346 sys 0m0.041s 00:21:43.346 04:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.346 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.346 04:58:57 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:21:43.346 04:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:43.346 04:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:43.346 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.346 ************************************ 00:21:43.346 START TEST dd_invalid_oflag 00:21:43.346 ************************************ 00:21:43.346 04:58:57 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:21:43.346 04:58:57 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:21:43.346 04:58:57 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.346 04:58:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:21:43.346 04:58:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.346 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.346 04:58:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.346 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.346 04:58:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.346 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.346 04:58:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.346 04:58:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:43.346 04:58:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:21:43.604 [2024-05-15 04:58:57.578589] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:21:43.604 ************************************ 00:21:43.604 END TEST dd_invalid_oflag 00:21:43.604 ************************************ 00:21:43.604 04:58:57 -- common/autotest_common.sh@643 -- # es=22 00:21:43.604 04:58:57 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:43.604 04:58:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:43.604 04:58:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:43.604 00:21:43.604 real 0m0.176s 00:21:43.604 user 0m0.038s 00:21:43.604 sys 0m0.043s 00:21:43.604 04:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.604 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.604 04:58:57 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:21:43.604 04:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:43.604 04:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:43.604 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.604 ************************************ 00:21:43.604 START TEST dd_invalid_iflag 00:21:43.604 ************************************ 00:21:43.604 04:58:57 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:21:43.604 04:58:57 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:21:43.604 04:58:57 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.604 04:58:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:21:43.604 04:58:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.604 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.604 04:58:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.604 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.604 04:58:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.604 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.604 04:58:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.604 04:58:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:43.604 04:58:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:21:43.604 [2024-05-15 04:58:57.804607] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:21:43.862 04:58:57 -- common/autotest_common.sh@643 -- # es=22 00:21:43.862 04:58:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:43.862 04:58:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:43.862 04:58:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:43.862 00:21:43.862 real 0m0.172s 00:21:43.862 user 0m0.039s 00:21:43.862 sys 0m0.037s 00:21:43.862 04:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.862 ************************************ 00:21:43.862 END TEST dd_invalid_iflag 00:21:43.862 ************************************ 00:21:43.862 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.862 04:58:57 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:21:43.862 04:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:43.862 04:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:43.862 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.862 ************************************ 00:21:43.862 START TEST dd_unknown_flag 00:21:43.862 ************************************ 00:21:43.862 04:58:57 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:21:43.862 04:58:57 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:21:43.862 04:58:57 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.862 04:58:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:21:43.862 04:58:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.862 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.862 04:58:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.862 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.862 04:58:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.862 04:58:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.862 04:58:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.862 04:58:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:43.862 04:58:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:21:43.862 [2024-05-15 04:58:58.040227] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:43.863 [2024-05-15 04:58:58.040395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:21:44.121 [2024-05-15 04:58:58.189257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.379 [2024-05-15 04:58:58.438188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.945 [2024-05-15 04:58:58.874858] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:21:44.945 [2024-05-15 04:58:58.874972] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:21:44.945 [2024-05-15 04:58:58.874996] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:21:44.945 [2024-05-15 04:58:58.875046] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:45.904 [2024-05-15 04:58:59.784524] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:21:46.162 04:59:00 -- common/autotest_common.sh@643 -- # es=234 00:21:46.163 04:59:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:46.163 04:59:00 -- common/autotest_common.sh@652 -- # es=106 00:21:46.163 04:59:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:21:46.163 04:59:00 -- common/autotest_common.sh@660 -- # es=1 00:21:46.163 04:59:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:46.163 ************************************ 00:21:46.163 END TEST dd_unknown_flag 00:21:46.163 ************************************ 00:21:46.163 00:21:46.163 real 0m2.319s 00:21:46.163 user 0m1.801s 00:21:46.163 sys 0m0.321s 00:21:46.163 04:59:00 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:21:46.163 04:59:00 -- common/autotest_common.sh@10 -- # set +x 00:21:46.163 04:59:00 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:21:46.163 04:59:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:46.163 04:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:46.163 04:59:00 -- common/autotest_common.sh@10 -- # set +x 00:21:46.163 ************************************ 00:21:46.163 START TEST dd_invalid_json 00:21:46.163 ************************************ 00:21:46.163 04:59:00 -- common/autotest_common.sh@1104 -- # invalid_json 00:21:46.163 04:59:00 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:21:46.163 04:59:00 -- common/autotest_common.sh@640 -- # local es=0 00:21:46.163 04:59:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:21:46.163 04:59:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:46.163 04:59:00 -- dd/negative_dd.sh@95 -- # : 00:21:46.163 04:59:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:46.163 04:59:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:46.163 04:59:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:46.163 04:59:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:46.163 04:59:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:46.163 04:59:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:46.163 04:59:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:21:46.163 04:59:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:21:46.421 [2024-05-15 04:59:00.416445] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:46.421 [2024-05-15 04:59:00.416609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61452 ] 00:21:46.421 [2024-05-15 04:59:00.565609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.679 [2024-05-15 04:59:00.794102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.679 [2024-05-15 04:59:00.794301] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:21:46.679 [2024-05-15 04:59:00.794342] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:46.679 [2024-05-15 04:59:00.794400] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:21:47.247 ************************************ 00:21:47.247 END TEST dd_invalid_json 00:21:47.247 ************************************ 00:21:47.247 04:59:01 -- common/autotest_common.sh@643 -- # es=234 00:21:47.247 04:59:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:47.247 04:59:01 -- common/autotest_common.sh@652 -- # es=106 00:21:47.247 04:59:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:21:47.247 04:59:01 -- common/autotest_common.sh@660 -- # es=1 00:21:47.247 04:59:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:47.247 00:21:47.247 real 0m0.948s 00:21:47.247 user 0m0.624s 00:21:47.247 sys 0m0.129s 00:21:47.247 04:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.247 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.247 00:21:47.247 real 0m8.208s 00:21:47.247 user 0m4.911s 00:21:47.247 sys 0m1.809s 00:21:47.247 04:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.247 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.247 ************************************ 00:21:47.247 END TEST spdk_dd_negative 00:21:47.247 ************************************ 00:21:47.247 ************************************ 00:21:47.247 END TEST spdk_dd 00:21:47.247 ************************************ 00:21:47.247 00:21:47.247 real 3m6.513s 00:21:47.247 user 2m23.089s 00:21:47.247 sys 0m27.628s 00:21:47.247 04:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.247 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.247 04:59:01 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@268 -- # timing_exit lib 00:21:47.247 04:59:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:47.247 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.247 04:59:01 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- 
spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:47.247 04:59:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:47.247 04:59:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:47.247 04:59:01 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:47.247 04:59:01 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:21:47.247 04:59:01 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:21:47.247 04:59:01 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:21:47.247 04:59:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:47.247 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.247 04:59:01 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:21:47.247 04:59:01 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:21:47.247 04:59:01 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:21:47.247 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:21:48.182 Waiting for block devices as requested 00:21:48.440 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:48.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:21:48.699 Cleaning 00:21:48.699 Removing: /var/run/dpdk/spdk0/config 00:21:48.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:48.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:48.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:48.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:48.699 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:48.699 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:48.699 Removing: /dev/shm/spdk_tgt_trace.pid39048 00:21:48.699 Removing: /var/run/dpdk/spdk0 00:21:48.699 Removing: /var/run/dpdk/spdk_pid38773 00:21:48.699 Removing: /var/run/dpdk/spdk_pid39048 00:21:48.699 Removing: /var/run/dpdk/spdk_pid39379 00:21:48.699 Removing: /var/run/dpdk/spdk_pid39643 00:21:48.699 Removing: /var/run/dpdk/spdk_pid39838 00:21:48.699 Removing: /var/run/dpdk/spdk_pid39981 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40111 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40259 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40382 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40435 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40485 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40574 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40728 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40834 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40932 00:21:48.699 Removing: /var/run/dpdk/spdk_pid40976 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41173 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41208 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41395 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41430 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41515 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41540 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41619 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41646 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41884 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41934 00:21:48.699 Removing: /var/run/dpdk/spdk_pid41979 00:21:48.699 Removing: /var/run/dpdk/spdk_pid42071 00:21:48.699 Removing: /var/run/dpdk/spdk_pid42167 00:21:48.699 Removing: /var/run/dpdk/spdk_pid42218 00:21:48.699 Removing: /var/run/dpdk/spdk_pid42318 00:21:48.699 Removing: /var/run/dpdk/spdk_pid42359 00:21:48.699 Removing: /var/run/dpdk/spdk_pid42418 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42463 
00:21:48.958 Removing: /var/run/dpdk/spdk_pid42516 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42562 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42616 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42662 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42728 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42759 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42818 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42862 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42921 00:21:48.958 Removing: /var/run/dpdk/spdk_pid42967 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43014 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43065 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43124 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43165 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43226 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43258 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43317 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43358 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43414 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43454 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43513 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43554 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43610 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43716 00:21:48.958 Removing: /var/run/dpdk/spdk_pid43869 00:21:48.958 Removing: /var/run/dpdk/spdk_pid44084 00:21:48.958 Removing: /var/run/dpdk/spdk_pid44197 00:21:48.958 Removing: /var/run/dpdk/spdk_pid44259 00:21:48.958 Removing: /var/run/dpdk/spdk_pid44423 00:21:48.958 Removing: /var/run/dpdk/spdk_pid44663 00:21:48.958 Removing: /var/run/dpdk/spdk_pid44893 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45034 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45186 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45279 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45317 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45355 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45853 00:21:48.958 Removing: /var/run/dpdk/spdk_pid45971 00:21:48.958 Removing: /var/run/dpdk/spdk_pid46109 00:21:48.958 Removing: /var/run/dpdk/spdk_pid46193 00:21:48.958 Removing: /var/run/dpdk/spdk_pid47118 00:21:48.958 Removing: /var/run/dpdk/spdk_pid48031 00:21:48.958 Removing: /var/run/dpdk/spdk_pid48939 00:21:48.958 Removing: /var/run/dpdk/spdk_pid50054 00:21:48.958 Removing: /var/run/dpdk/spdk_pid51146 00:21:48.958 Removing: /var/run/dpdk/spdk_pid52215 00:21:48.958 Removing: /var/run/dpdk/spdk_pid53697 00:21:48.958 Removing: /var/run/dpdk/spdk_pid54913 00:21:48.958 Removing: /var/run/dpdk/spdk_pid56106 00:21:48.958 Removing: /var/run/dpdk/spdk_pid56822 00:21:48.958 Removing: /var/run/dpdk/spdk_pid56892 00:21:48.958 Removing: /var/run/dpdk/spdk_pid56955 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57033 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57181 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57337 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57579 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57836 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57852 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57910 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57954 00:21:48.958 Removing: /var/run/dpdk/spdk_pid57989 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58033 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58064 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58104 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58139 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58178 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58211 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58252 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58286 00:21:48.958 Removing: 
/var/run/dpdk/spdk_pid58319 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58358 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58390 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58433 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58476 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58508 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58541 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58607 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58640 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58687 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58785 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58842 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58874 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58933 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58960 00:21:48.958 Removing: /var/run/dpdk/spdk_pid58991 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59060 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59098 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59146 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59182 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59210 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59246 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59271 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59300 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59334 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59370 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59420 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59473 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59508 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59565 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59597 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59620 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59693 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59731 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59779 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59812 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59848 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59881 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59910 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59938 00:21:48.958 Removing: /var/run/dpdk/spdk_pid59970 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60002 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60105 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60202 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60365 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60399 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60462 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60526 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60570 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60607 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60648 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60697 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60738 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60836 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60908 00:21:48.958 Removing: /var/run/dpdk/spdk_pid60966 00:21:48.958 Removing: /var/run/dpdk/spdk_pid61252 00:21:48.958 Removing: /var/run/dpdk/spdk_pid61399 00:21:48.958 Removing: /var/run/dpdk/spdk_pid61452 00:21:48.958 Clean 00:21:49.217 killing process with pid 30571 00:21:49.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: 30571 Terminated "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" > /dev/null (wd: /home/vagrant/spdk_repo) 00:21:49.217 killing process with pid 30572 00:21:49.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: 30572 Terminated "$rootdir/scripts/perf/pm/collect-vmstat" -d "$output_dir/power" > /dev/null (wd: /home/vagrant/spdk_repo) 00:21:49.217 04:59:03 -- 
common/autotest_common.sh@1436 -- # return 0 00:21:49.217 04:59:03 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:21:49.217 04:59:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.217 04:59:03 -- common/autotest_common.sh@10 -- # set +x 00:21:49.217 04:59:03 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:21:49.217 04:59:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.217 04:59:03 -- common/autotest_common.sh@10 -- # set +x 00:21:49.217 04:59:03 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:49.217 04:59:03 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:49.217 04:59:03 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:49.217 04:59:03 -- spdk/autotest.sh@394 -- # hash lcov 00:21:49.217 04:59:03 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:49.217 04:59:03 -- spdk/autotest.sh@396 -- # hostname 00:21:49.217 04:59:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:49.475 geninfo: WARNING: invalid characters removed from testname! 00:22:36.145 04:59:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:39.431 04:59:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:41.358 04:59:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.890 04:59:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:46.424 05:00:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:49.707 05:00:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:52.287 05:00:06 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:52.287 05:00:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.287 05:00:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:52.287 05:00:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.287 05:00:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.287 05:00:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:22:52.287 05:00:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:22:52.287 05:00:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:22:52.287 05:00:06 -- paths/export.sh@5 -- $ export PATH 00:22:52.287 05:00:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:22:52.287 05:00:06 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:52.287 05:00:06 -- common/autobuild_common.sh@435 -- $ date +%s 00:22:52.287 05:00:06 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715749206.XXXXXX 00:22:52.287 05:00:06 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715749206.ncGpbz 00:22:52.287 05:00:06 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:22:52.287 05:00:06 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:22:52.287 05:00:06 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:52.287 05:00:06 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:52.287 05:00:06 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:52.287 05:00:06 -- common/autobuild_common.sh@451 -- $ get_config_params 00:22:52.287 05:00:06 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:22:52.287 05:00:06 -- common/autotest_common.sh@10 -- $ set +x 00:22:52.287 05:00:06 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:22:52.287 05:00:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
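The lcov passes traced above follow a standard three-step coverage merge: capture a test-time tracefile from the instrumented tree, append it to the pre-test baseline with -a, then prune paths that should not count toward coverage with -r. A minimal sketch of that flow, assuming the repo path and cov_*.info names from the log (the $(hostname) test label and the loop are illustrative; the real script passes each pattern in a separate invocation, as seen above):

    #!/bin/bash
    # Capture coverage gathered during the test run into its own tracefile.
    lcov --rc lcov_branch_coverage=1 --no-external -q -c \
        -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info
    # Merge the pre-test baseline and the test capture into one total.
    lcov --rc lcov_branch_coverage=1 -q \
        -a cov_base.info -a cov_test.info -o cov_total.info
    # Strip vendored and system paths so they do not count toward SPDK coverage
    # (the log also filters '*/app/spdk_lspci/*' and '*/app/spdk_top/*' the same way).
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
        lcov --rc lcov_branch_coverage=1 -q \
            -r cov_total.info "$pattern" -o cov_total.info
    done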
00:22:52.287 05:00:06 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:52.287 05:00:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:52.287 05:00:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:52.287 05:00:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:52.287 05:00:06 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:22:52.287 05:00:06 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:22:52.287 05:00:06 -- common/autotest_common.sh@10 -- $ set +x 00:22:52.287 05:00:06 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:22:52.287 05:00:06 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:22:52.287 05:00:06 -- spdk/autopackage.sh@40 -- $ get_config_params 00:22:52.287 05:00:06 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:22:52.287 05:00:06 -- common/autotest_common.sh@10 -- $ set +x 00:22:52.287 05:00:06 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:22:52.287 05:00:06 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:22:52.287 05:00:06 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos --enable-lto 00:22:52.287 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:52.287 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:22:52.287 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:22:52.544 Using 'verbs' RDMA provider 00:22:53.108 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer. 00:22:53.108 Without ISA-L, there is no software support for crypto or compression, 00:22:53.108 so these features will be disabled. 00:22:53.366 Creating mk/config.mk...done. 00:22:53.366 Creating mk/cc.flags.mk...done. 00:22:53.366 Type 'make' to build. 00:22:53.366 05:00:07 -- spdk/autopackage.sh@43 -- $ make -j10 00:22:53.624 make[1]: Nothing to be done for 'all'. 
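The autopackage release rebuild traced above is flag surgery on the debug configuration: take the saved parameter string, strip --enable-debug with sed, re-run configure with --enable-lto appended, and build. A sketch of that sequence, using the parameter string shown in the log (the variable handling mirrors the trace and is illustrative, not the script itself):

    #!/bin/bash
    # Parameters the debug build was configured with (from get_config_params).
    config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos'
    # Release builds must not carry the debug flag; strip it, keep the rest.
    config_params=$(sed 's/--enable-debug//g' <<< "$config_params")
    # Reconfigure with link-time optimization on top of the remaining flags,
    # then build with the parallelism set in MAKEFLAGS above.
    ./configure $config_params --enable-lto
    make -j10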
00:22:57.876 The Meson build system 00:22:57.876 Version: 0.61.5 00:22:57.876 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:22:57.876 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:22:57.876 Build type: native build 00:22:57.876 Program cat found: YES (/bin/cat) 00:22:57.876 Project name: DPDK 00:22:57.876 Project version: 23.11.0 00:22:57.876 C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)") 00:22:57.876 C linker for the host machine: cc ld.bfd 2.35-5 00:22:57.876 Host machine cpu family: x86_64 00:22:57.876 Host machine cpu: x86_64 00:22:57.876 Message: ## Building in Developer Mode ## 00:22:57.876 Program pkg-config found: YES (/bin/pkg-config) 00:22:57.876 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:22:57.876 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:22:57.876 Program python3 found: YES (/usr/bin/python3) 00:22:57.876 Program cat found: YES (/bin/cat) 00:22:57.876 Compiler for C supports arguments -march=native: YES 00:22:57.876 Checking for size of "void *" : 8 00:22:57.876 Checking for size of "void *" : 8 00:22:57.876 Library m found: YES 00:22:57.876 Library numa found: YES 00:22:57.876 Has header "numaif.h" : YES 00:22:57.876 Library fdt found: NO 00:22:57.876 Library execinfo found: NO 00:22:57.876 Has header "execinfo.h" : YES 00:22:57.876 Found pkg-config: /bin/pkg-config (0.27.1) 00:22:57.876 Run-time dependency libarchive found: NO (tried pkgconfig) 00:22:57.876 Run-time dependency libbsd found: NO (tried pkgconfig) 00:22:57.876 Run-time dependency jansson found: NO (tried pkgconfig) 00:22:57.876 Run-time dependency openssl found: YES 1.0.2k 00:22:57.876 Run-time dependency libpcap found: NO (tried pkgconfig) 00:22:57.876 Library pcap found: NO 00:22:57.876 Compiler for C supports arguments -Wcast-qual: YES 00:22:57.876 Compiler for C supports arguments -Wdeprecated: YES 00:22:57.876 Compiler for C supports arguments -Wformat: YES 00:22:57.876 Compiler for C supports arguments -Wformat-nonliteral: NO 00:22:57.876 Compiler for C supports arguments -Wformat-security: NO 00:22:57.876 Compiler for C supports arguments -Wmissing-declarations: YES 00:22:57.876 Compiler for C supports arguments -Wmissing-prototypes: YES 00:22:57.876 Compiler for C supports arguments -Wnested-externs: YES 00:22:57.876 Compiler for C supports arguments -Wold-style-definition: YES 00:22:57.876 Compiler for C supports arguments -Wpointer-arith: YES 00:22:57.876 Compiler for C supports arguments -Wsign-compare: YES 00:22:57.876 Compiler for C supports arguments -Wstrict-prototypes: YES 00:22:57.876 Compiler for C supports arguments -Wundef: YES 00:22:57.876 Compiler for C supports arguments -Wwrite-strings: YES 00:22:57.876 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:22:57.876 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:22:57.876 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:22:57.876 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:22:57.876 Program objdump found: YES (/bin/objdump) 00:22:57.876 Compiler for C supports arguments -mavx512f: YES 00:22:57.876 Checking if "AVX512 checking" compiles: YES 00:22:57.876 Fetching value of define "__SSE4_2__" : 1 00:22:57.876 Fetching value of define "__AES__" : 1 00:22:57.876 Fetching value of define "__AVX__" : 1 00:22:57.876 Fetching value of define "__AVX2__" : 1 
00:22:57.876 Fetching value of define "__AVX512BW__" : 1 00:22:57.876 Fetching value of define "__AVX512CD__" : 1 00:22:57.876 Fetching value of define "__AVX512DQ__" : 1 00:22:57.876 Fetching value of define "__AVX512F__" : 1 00:22:57.876 Fetching value of define "__AVX512VL__" : 1 00:22:57.876 Fetching value of define "__PCLMUL__" : 1 00:22:57.876 Fetching value of define "__RDRND__" : 1 00:22:57.876 Fetching value of define "__RDSEED__" : 1 00:22:57.876 Fetching value of define "__VPCLMULQDQ__" : 00:22:57.876 Fetching value of define "__znver1__" : 00:22:57.876 Fetching value of define "__znver2__" : 00:22:57.876 Fetching value of define "__znver3__" : 00:22:57.876 Fetching value of define "__znver4__" : 00:22:57.876 Compiler for C supports arguments -ffat-lto-objects: YES 00:22:57.876 Library asan found: YES 00:22:57.876 Compiler for C supports arguments -Wno-format-truncation: YES 00:22:57.876 Message: lib/log: Defining dependency "log" 00:22:57.876 Message: lib/kvargs: Defining dependency "kvargs" 00:22:57.876 Message: lib/telemetry: Defining dependency "telemetry" 00:22:57.876 Library rt found: YES 00:22:57.876 Checking for function "getentropy" : NO 00:22:57.876 Message: lib/eal: Defining dependency "eal" 00:22:57.876 Message: lib/ring: Defining dependency "ring" 00:22:57.876 Message: lib/rcu: Defining dependency "rcu" 00:22:57.876 Message: lib/mempool: Defining dependency "mempool" 00:22:57.876 Message: lib/mbuf: Defining dependency "mbuf" 00:22:57.876 Fetching value of define "__PCLMUL__" : 1 (cached) 00:22:57.876 Fetching value of define "__AVX512F__" : 1 (cached) 00:22:59.778 Fetching value of define "__AVX512BW__" : 1 (cached) 00:22:59.778 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:22:59.778 Fetching value of define "__AVX512VL__" : 1 (cached) 00:22:59.778 Fetching value of define "__VPCLMULQDQ__" : (cached) 00:22:59.778 Compiler for C supports arguments -mpclmul: YES 00:22:59.778 Compiler for C supports arguments -maes: YES 00:22:59.778 Compiler for C supports arguments -mavx512f: YES (cached) 00:22:59.778 Compiler for C supports arguments -mavx512bw: YES 00:22:59.778 Compiler for C supports arguments -mavx512dq: YES 00:22:59.778 Compiler for C supports arguments -mavx512vl: YES 00:22:59.778 Compiler for C supports arguments -mvpclmulqdq: YES 00:22:59.778 Compiler for C supports arguments -mavx2: YES 00:22:59.778 Compiler for C supports arguments -mavx: YES 00:22:59.778 Message: lib/net: Defining dependency "net" 00:22:59.778 Message: lib/meter: Defining dependency "meter" 00:22:59.778 Message: lib/ethdev: Defining dependency "ethdev" 00:22:59.778 Message: lib/pci: Defining dependency "pci" 00:22:59.778 Message: lib/cmdline: Defining dependency "cmdline" 00:22:59.778 Message: lib/hash: Defining dependency "hash" 00:22:59.778 Message: lib/timer: Defining dependency "timer" 00:22:59.778 Message: lib/compressdev: Defining dependency "compressdev" 00:22:59.778 Message: lib/cryptodev: Defining dependency "cryptodev" 00:22:59.778 Message: lib/dmadev: Defining dependency "dmadev" 00:22:59.778 Compiler for C supports arguments -Wno-cast-qual: YES 00:22:59.778 Message: lib/power: Defining dependency "power" 00:22:59.778 Message: lib/reorder: Defining dependency "reorder" 00:22:59.778 Message: lib/security: Defining dependency "security" 00:22:59.778 Has header "linux/userfaultfd.h" : YES 00:22:59.778 Has header "linux/vduse.h" : NO 00:22:59.778 Message: lib/vhost: Defining dependency "vhost" 00:22:59.778 Compiler for C supports arguments -Wno-format-truncation: YES 
(cached) 00:22:59.778 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:22:59.778 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:22:59.778 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:22:59.778 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:22:59.778 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:22:59.778 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:22:59.778 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:22:59.778 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:22:59.778 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:22:59.778 Program doxygen found: YES (/bin/doxygen) 00:22:59.778 Configuring doxy-api-html.conf using configuration 00:22:59.778 Configuring doxy-api-man.conf using configuration 00:22:59.778 Program mandb found: YES (/bin/mandb) 00:22:59.778 Program sphinx-build found: NO 00:22:59.778 Configuring rte_build_config.h using configuration 00:22:59.778 Message: 00:22:59.778 ================= 00:22:59.778 Applications Enabled 00:22:59.778 ================= 00:22:59.778 00:22:59.778 apps: 00:22:59.778 00:22:59.778 00:22:59.778 Message: 00:22:59.778 ================= 00:22:59.778 Libraries Enabled 00:22:59.778 ================= 00:22:59.778 00:22:59.778 libs: 00:22:59.778 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:22:59.778 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:22:59.778 cryptodev, dmadev, power, reorder, security, vhost, 00:22:59.778 00:22:59.778 Message: 00:22:59.778 =============== 00:22:59.778 Drivers Enabled 00:22:59.778 =============== 00:22:59.778 00:22:59.778 common: 00:22:59.778 00:22:59.778 bus: 00:22:59.778 pci, vdev, 00:22:59.778 mempool: 00:22:59.778 ring, 00:22:59.778 dma: 00:22:59.778 00:22:59.778 net: 00:22:59.778 00:22:59.778 crypto: 00:22:59.778 00:22:59.778 compress: 00:22:59.778 00:22:59.778 vdpa: 00:22:59.778 00:22:59.778 00:22:59.778 Message: 00:22:59.778 ================= 00:22:59.778 Content Skipped 00:22:59.778 ================= 00:22:59.778 00:22:59.778 apps: 00:22:59.778 dumpcap: explicitly disabled via build config 00:22:59.778 graph: explicitly disabled via build config 00:22:59.778 pdump: explicitly disabled via build config 00:22:59.778 proc-info: explicitly disabled via build config 00:22:59.778 test-acl: explicitly disabled via build config 00:22:59.778 test-bbdev: explicitly disabled via build config 00:22:59.778 test-cmdline: explicitly disabled via build config 00:22:59.778 test-compress-perf: explicitly disabled via build config 00:22:59.778 test-crypto-perf: explicitly disabled via build config 00:22:59.778 test-dma-perf: explicitly disabled via build config 00:22:59.778 test-eventdev: explicitly disabled via build config 00:22:59.778 test-fib: explicitly disabled via build config 00:22:59.779 test-flow-perf: explicitly disabled via build config 00:22:59.779 test-gpudev: explicitly disabled via build config 00:22:59.779 test-mldev: explicitly disabled via build config 00:22:59.779 test-pipeline: explicitly disabled via build config 00:22:59.779 test-pmd: explicitly disabled via build config 00:22:59.779 test-regex: explicitly disabled via build config 00:22:59.779 test-sad: explicitly disabled via build config 00:22:59.779 test-security-perf: explicitly disabled via build config 00:22:59.779 00:22:59.779 libs: 00:22:59.779 metrics: explicitly disabled via build config 00:22:59.779 acl: 
explicitly disabled via build config 00:22:59.779 bbdev: explicitly disabled via build config 00:22:59.779 bitratestats: explicitly disabled via build config 00:22:59.779 bpf: explicitly disabled via build config 00:22:59.779 cfgfile: explicitly disabled via build config 00:22:59.779 distributor: explicitly disabled via build config 00:22:59.779 efd: explicitly disabled via build config 00:22:59.779 eventdev: explicitly disabled via build config 00:22:59.779 dispatcher: explicitly disabled via build config 00:22:59.779 gpudev: explicitly disabled via build config 00:22:59.779 gro: explicitly disabled via build config 00:22:59.779 gso: explicitly disabled via build config 00:22:59.779 ip_frag: explicitly disabled via build config 00:22:59.779 jobstats: explicitly disabled via build config 00:22:59.779 latencystats: explicitly disabled via build config 00:22:59.779 lpm: explicitly disabled via build config 00:22:59.779 member: explicitly disabled via build config 00:22:59.779 pcapng: explicitly disabled via build config 00:22:59.779 rawdev: explicitly disabled via build config 00:22:59.779 regexdev: explicitly disabled via build config 00:22:59.779 mldev: explicitly disabled via build config 00:22:59.779 rib: explicitly disabled via build config 00:22:59.779 sched: explicitly disabled via build config 00:22:59.779 stack: explicitly disabled via build config 00:22:59.779 ipsec: explicitly disabled via build config 00:22:59.779 pdcp: explicitly disabled via build config 00:22:59.779 fib: explicitly disabled via build config 00:22:59.779 port: explicitly disabled via build config 00:22:59.779 pdump: explicitly disabled via build config 00:22:59.779 table: explicitly disabled via build config 00:22:59.779 pipeline: explicitly disabled via build config 00:22:59.779 graph: explicitly disabled via build config 00:22:59.779 node: explicitly disabled via build config 00:22:59.779 00:22:59.779 drivers: 00:22:59.779 common/cpt: not in enabled drivers build config 00:22:59.779 common/dpaax: not in enabled drivers build config 00:22:59.779 common/iavf: not in enabled drivers build config 00:22:59.779 common/idpf: not in enabled drivers build config 00:22:59.779 common/mvep: not in enabled drivers build config 00:22:59.779 common/octeontx: not in enabled drivers build config 00:22:59.779 bus/auxiliary: not in enabled drivers build config 00:22:59.779 bus/cdx: not in enabled drivers build config 00:22:59.779 bus/dpaa: not in enabled drivers build config 00:22:59.779 bus/fslmc: not in enabled drivers build config 00:22:59.779 bus/ifpga: not in enabled drivers build config 00:22:59.779 bus/platform: not in enabled drivers build config 00:22:59.779 bus/vmbus: not in enabled drivers build config 00:22:59.779 common/cnxk: not in enabled drivers build config 00:22:59.779 common/mlx5: not in enabled drivers build config 00:22:59.779 common/nfp: not in enabled drivers build config 00:22:59.779 common/qat: not in enabled drivers build config 00:22:59.779 common/sfc_efx: not in enabled drivers build config 00:22:59.779 mempool/bucket: not in enabled drivers build config 00:22:59.779 mempool/cnxk: not in enabled drivers build config 00:22:59.779 mempool/dpaa: not in enabled drivers build config 00:22:59.779 mempool/dpaa2: not in enabled drivers build config 00:22:59.779 mempool/octeontx: not in enabled drivers build config 00:22:59.779 mempool/stack: not in enabled drivers build config 00:22:59.779 dma/cnxk: not in enabled drivers build config 00:22:59.779 dma/dpaa: not in enabled drivers build config 00:22:59.779 
dma/dpaa2: not in enabled drivers build config 00:22:59.779 dma/hisilicon: not in enabled drivers build config 00:22:59.779 dma/idxd: not in enabled drivers build config 00:22:59.779 dma/ioat: not in enabled drivers build config 00:22:59.779 dma/skeleton: not in enabled drivers build config 00:22:59.779 net/af_packet: not in enabled drivers build config 00:22:59.779 net/af_xdp: not in enabled drivers build config 00:22:59.779 net/ark: not in enabled drivers build config 00:22:59.779 net/atlantic: not in enabled drivers build config 00:22:59.779 net/avp: not in enabled drivers build config 00:22:59.779 net/axgbe: not in enabled drivers build config 00:22:59.779 net/bnx2x: not in enabled drivers build config 00:22:59.779 net/bnxt: not in enabled drivers build config 00:22:59.779 net/bonding: not in enabled drivers build config 00:22:59.779 net/cnxk: not in enabled drivers build config 00:22:59.779 net/cpfl: not in enabled drivers build config 00:22:59.779 net/cxgbe: not in enabled drivers build config 00:22:59.779 net/dpaa: not in enabled drivers build config 00:22:59.779 net/dpaa2: not in enabled drivers build config 00:22:59.779 net/e1000: not in enabled drivers build config 00:22:59.779 net/ena: not in enabled drivers build config 00:22:59.779 net/enetc: not in enabled drivers build config 00:22:59.779 net/enetfec: not in enabled drivers build config 00:22:59.779 net/enic: not in enabled drivers build config 00:22:59.779 net/failsafe: not in enabled drivers build config 00:22:59.779 net/fm10k: not in enabled drivers build config 00:22:59.779 net/gve: not in enabled drivers build config 00:22:59.779 net/hinic: not in enabled drivers build config 00:22:59.779 net/hns3: not in enabled drivers build config 00:22:59.779 net/i40e: not in enabled drivers build config 00:22:59.779 net/iavf: not in enabled drivers build config 00:22:59.779 net/ice: not in enabled drivers build config 00:22:59.779 net/idpf: not in enabled drivers build config 00:22:59.779 net/igc: not in enabled drivers build config 00:22:59.779 net/ionic: not in enabled drivers build config 00:22:59.779 net/ipn3ke: not in enabled drivers build config 00:22:59.779 net/ixgbe: not in enabled drivers build config 00:22:59.779 net/mana: not in enabled drivers build config 00:22:59.779 net/memif: not in enabled drivers build config 00:22:59.779 net/mlx4: not in enabled drivers build config 00:22:59.779 net/mlx5: not in enabled drivers build config 00:22:59.779 net/mvneta: not in enabled drivers build config 00:22:59.779 net/mvpp2: not in enabled drivers build config 00:22:59.779 net/netvsc: not in enabled drivers build config 00:22:59.779 net/nfb: not in enabled drivers build config 00:22:59.779 net/nfp: not in enabled drivers build config 00:22:59.779 net/ngbe: not in enabled drivers build config 00:22:59.779 net/null: not in enabled drivers build config 00:22:59.779 net/octeontx: not in enabled drivers build config 00:22:59.779 net/octeon_ep: not in enabled drivers build config 00:22:59.779 net/pcap: not in enabled drivers build config 00:22:59.779 net/pfe: not in enabled drivers build config 00:22:59.779 net/qede: not in enabled drivers build config 00:22:59.779 net/ring: not in enabled drivers build config 00:22:59.779 net/sfc: not in enabled drivers build config 00:22:59.779 net/softnic: not in enabled drivers build config 00:22:59.779 net/tap: not in enabled drivers build config 00:22:59.779 net/thunderx: not in enabled drivers build config 00:22:59.779 net/txgbe: not in enabled drivers build config 00:22:59.779 net/vdev_netvsc: 
not in enabled drivers build config 00:22:59.779 net/vhost: not in enabled drivers build config 00:22:59.779 net/virtio: not in enabled drivers build config 00:22:59.779 net/vmxnet3: not in enabled drivers build config 00:22:59.779 raw/*: missing internal dependency, "rawdev" 00:22:59.779 crypto/armv8: not in enabled drivers build config 00:22:59.779 crypto/bcmfs: not in enabled drivers build config 00:22:59.779 crypto/caam_jr: not in enabled drivers build config 00:22:59.779 crypto/ccp: not in enabled drivers build config 00:22:59.779 crypto/cnxk: not in enabled drivers build config 00:22:59.779 crypto/dpaa_sec: not in enabled drivers build config 00:22:59.779 crypto/dpaa2_sec: not in enabled drivers build config 00:22:59.779 crypto/ipsec_mb: not in enabled drivers build config 00:22:59.779 crypto/mlx5: not in enabled drivers build config 00:22:59.779 crypto/mvsam: not in enabled drivers build config 00:22:59.779 crypto/nitrox: not in enabled drivers build config 00:22:59.779 crypto/null: not in enabled drivers build config 00:22:59.779 crypto/octeontx: not in enabled drivers build config 00:22:59.779 crypto/openssl: not in enabled drivers build config 00:22:59.779 crypto/scheduler: not in enabled drivers build config 00:22:59.780 crypto/uadk: not in enabled drivers build config 00:22:59.780 crypto/virtio: not in enabled drivers build config 00:22:59.780 compress/isal: not in enabled drivers build config 00:22:59.780 compress/mlx5: not in enabled drivers build config 00:22:59.780 compress/octeontx: not in enabled drivers build config 00:22:59.780 compress/zlib: not in enabled drivers build config 00:22:59.780 regex/*: missing internal dependency, "regexdev" 00:22:59.780 ml/*: missing internal dependency, "mldev" 00:22:59.780 vdpa/ifc: not in enabled drivers build config 00:22:59.780 vdpa/mlx5: not in enabled drivers build config 00:22:59.780 vdpa/nfp: not in enabled drivers build config 00:22:59.780 vdpa/sfc: not in enabled drivers build config 00:22:59.780 event/*: missing internal dependency, "eventdev" 00:22:59.780 baseband/*: missing internal dependency, "bbdev" 00:22:59.780 gpu/*: missing internal dependency, "gpudev" 00:22:59.780 00:22:59.780 00:23:00.037 Build targets in project: 85 00:23:00.037 00:23:00.037 DPDK 23.11.0 00:23:00.037 00:23:00.037 User defined options 00:23:00.037 default_library : static 00:23:00.037 libdir : lib 00:23:00.037 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:00.037 b_lto : true 00:23:00.037 b_sanitize : address 00:23:00.037 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:23:00.037 c_link_args : 00:23:00.037 cpu_instruction_set: native 00:23:00.037 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:23:00.037 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:23:00.037 enable_docs : false 00:23:00.037 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:23:00.037 enable_kmods : false 00:23:00.037 tests : false 00:23:00.037 00:23:00.037 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:23:00.037 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:23:00.602 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:23:00.602 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:23:00.602 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:23:00.602 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:23:00.860 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:23:00.860 [5/264] Linking static target lib/librte_kvargs.a 00:23:00.860 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:23:00.860 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:23:00.860 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:23:00.860 [9/264] Linking static target lib/librte_log.a 00:23:00.860 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:23:00.860 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:23:00.860 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:23:00.860 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:23:01.118 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:23:01.118 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:23:01.118 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:23:01.377 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:23:01.377 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:23:01.377 [19/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:23:01.377 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:23:01.377 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:23:01.377 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:23:01.635 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:23:01.635 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:23:01.635 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:23:01.635 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:23:01.635 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:23:01.635 [28/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:23:01.893 [29/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:23:01.893 [30/264] Linking static target lib/librte_telemetry.a 00:23:01.894 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:23:01.894 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:23:01.894 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:23:01.894 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:23:01.894 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:23:01.894 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:23:01.894 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:23:02.152 [38/264] Linking target lib/librte_log.so.24.0 00:23:02.152 
[39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:23:02.152 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:23:02.411 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:23:02.411 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:23:02.411 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:23:02.411 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:23:02.411 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:23:02.411 [46/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:23:02.411 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:23:02.669 [48/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:23:02.669 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:23:02.669 [50/264] Linking target lib/librte_kvargs.so.24.0 00:23:02.669 [51/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:23:02.669 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:23:02.669 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:23:02.669 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:23:02.669 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:23:02.669 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:23:02.927 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:23:02.927 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:23:02.927 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:23:02.927 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:23:02.927 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:23:02.927 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:23:02.927 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:23:02.927 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:23:03.185 [65/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:23:03.185 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:23:03.185 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:23:03.185 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:23:03.185 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:23:03.185 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:23:03.443 [71/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:23:03.443 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:23:03.443 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:23:03.443 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:23:03.443 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:03.443 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:23:03.443 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:23:03.701 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:23:03.701 [79/264] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:23:03.701 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:23:03.701 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:23:03.701 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:23:03.701 [83/264] Linking static target lib/librte_ring.a 00:23:03.701 [84/264] Linking target lib/librte_telemetry.so.24.0 00:23:03.959 [85/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:23:03.959 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:23:03.959 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:23:03.959 [88/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:23:04.217 [89/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:23:04.217 [90/264] Linking static target lib/librte_eal.a 00:23:04.217 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:23:04.217 [92/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:23:04.217 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:23:04.217 [94/264] Linking static target lib/librte_mempool.a 00:23:04.477 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:23:04.477 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:23:04.477 [97/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:23:04.477 [98/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:23:04.477 [99/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:23:04.477 [100/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:23:04.477 [101/264] Linking static target lib/librte_rcu.a 00:23:04.477 [102/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:23:04.735 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:23:04.735 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:23:04.735 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:23:04.735 [106/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:23:04.735 [107/264] Linking static target lib/librte_meter.a 00:23:04.735 [108/264] Linking static target lib/librte_net.a 00:23:04.993 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:23:04.993 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:23:04.993 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:23:05.250 [112/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:23:05.250 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:23:05.250 [114/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:23:05.250 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:23:05.250 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:23:05.508 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:23:05.508 [118/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:23:05.508 [119/264] Linking static target lib/librte_mbuf.a 00:23:05.767 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:23:05.767 
[121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:23:06.025 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:23:06.025 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:23:06.025 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:23:06.284 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:23:06.284 [126/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:23:06.284 [127/264] Linking static target lib/librte_pci.a 00:23:06.284 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:23:06.284 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:23:06.284 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:23:06.284 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:23:06.542 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:23:06.542 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:23:06.542 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:23:06.542 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:23:06.542 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:23:06.542 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:23:06.542 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:23:06.542 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:23:06.542 [140/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:23:06.542 [141/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:06.542 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:23:06.542 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:23:06.800 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:23:06.800 [145/264] Linking static target lib/librte_cmdline.a 00:23:06.800 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:23:07.059 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:23:07.059 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:23:07.316 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:23:07.316 [150/264] Linking static target lib/librte_timer.a 00:23:07.316 [151/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:23:07.316 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:23:07.316 [153/264] Linking static target lib/librte_compressdev.a 00:23:07.316 [154/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:23:07.573 [155/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:23:07.573 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:23:07.831 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:23:07.831 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:23:07.831 [159/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:23:08.090 [160/264] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:23:08.090 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:23:08.090 [162/264] Linking static target lib/librte_dmadev.a 00:23:08.090 [163/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:23:08.090 [164/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:08.090 [165/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:23:08.347 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:23:08.347 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:23:08.604 [168/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:23:08.604 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:23:08.604 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:23:08.604 [171/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:23:08.862 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:23:08.862 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:08.862 [174/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:09.118 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:09.118 [176/264] Linking static target lib/librte_power.a 00:23:09.118 [177/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:09.118 [178/264] Linking static target lib/librte_security.a 00:23:09.118 [179/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:09.118 [180/264] Linking static target lib/librte_reorder.a 00:23:09.376 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:09.376 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:09.376 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:09.634 [184/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:09.634 [185/264] Linking static target lib/librte_cryptodev.a 00:23:09.634 [186/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:09.634 [187/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:23:09.634 [188/264] Linking static target lib/librte_ethdev.a 00:23:09.892 [189/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:09.892 [190/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:10.458 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:10.458 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:10.716 [193/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:10.716 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:10.716 [195/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:23:10.716 [196/264] Linking static target lib/librte_hash.a 00:23:10.973 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:10.973 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:11.231 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:11.231 [200/264] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:11.231 [201/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:11.489 [202/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:11.489 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:11.489 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:11.489 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:11.489 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:23:11.747 [207/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:23:11.747 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:11.747 [209/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:11.747 [210/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:23:11.747 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:11.747 [212/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:11.747 [213/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:11.747 [214/264] Linking static target drivers/librte_bus_vdev.a 00:23:11.747 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:11.747 [216/264] Linking static target drivers/librte_bus_pci.a 00:23:12.006 [217/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:12.006 [218/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:23:12.264 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:23:12.264 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:12.264 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:12.264 [222/264] Linking static target drivers/librte_mempool_ring.a 00:23:12.264 [223/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:12.522 [224/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:15.831 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:22.449 [226/264] Linking target lib/librte_eal.so.24.0 00:23:22.449 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:23:22.449 [228/264] Linking target lib/librte_pci.so.24.0 00:23:22.449 [229/264] Linking target lib/librte_ring.so.24.0 00:23:22.449 [230/264] Linking target lib/librte_meter.so.24.0 00:23:22.449 [231/264] Linking target drivers/librte_bus_vdev.so.24.0 00:23:22.449 [232/264] Linking target lib/librte_timer.so.24.0 00:23:22.449 [233/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:23:22.449 [234/264] Linking target lib/librte_dmadev.so.24.0 00:23:22.449 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:23:22.449 [236/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:23:22.706 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:23:22.706 [238/264] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:23:23.272 [239/264] Linking target lib/librte_rcu.so.24.0 00:23:23.272 [240/264] Linking target lib/librte_mempool.so.24.0 00:23:23.529 [241/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:23:23.529 [242/264] Linking target drivers/librte_bus_pci.so.24.0 00:23:23.529 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:23:24.096 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:23:25.029 [245/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:23:25.287 [246/264] Linking target lib/librte_mbuf.so.24.0 00:23:25.853 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:23:26.110 [248/264] Linking target lib/librte_reorder.so.24.0 00:23:26.368 [249/264] Linking target lib/librte_compressdev.so.24.0 00:23:26.625 [250/264] Linking target lib/librte_net.so.24.0 00:23:27.191 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:23:27.758 [252/264] Linking target lib/librte_cryptodev.so.24.0
00:23:27.758 In function '_mm256_storeu_si256',
00:23:27.758     inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:23:27.758     inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:868:10:
00:23:27.758 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:27.758   928 |   *__P = __A;
00:23:27.758       |   ^
00:23:27.758 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:23:27.758 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:23:27.758   156 |   uint8_t driver_priv_data[0];
00:23:27.758       |           ^
00:23:27.758 In function '_mm_storeu_si128',
00:23:27.758     inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:23:27.758     inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:868:10:
00:23:27.758 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:27.758   727 |   *__P = __B;
00:23:27.758       |   ^
00:23:27.758 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:23:27.758 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:23:27.758   156 |   uint8_t driver_priv_data[0];
00:23:27.758       |           ^
00:23:27.758 In function '_mm_storeu_si128',
00:23:27.758     inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:23:27.758     inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:868:10:
00:23:27.758 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:27.758   727 |   *__P = __B;
00:23:27.758       |   ^
00:23:27.758 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:23:27.758 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:23:27.758   156 |   uint8_t driver_priv_data[0];
00:23:27.758       |           ^
00:23:27.758 In function '_mm256_storeu_si256',
00:23:27.758     inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:23:27.758     inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:866:10:
00:23:27.758 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:27.758   928 |   *__P = __A;
00:23:27.758       |   ^
00:23:27.758 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:23:27.758 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:23:27.758   156 |   uint8_t driver_priv_data[0];
00:23:27.758       |           ^
00:23:27.758 In function '_mm_storeu_si128',
00:23:27.758     inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:23:27.758     inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:866:10:
00:23:27.758 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:27.758   727 |   *__P = __B;
00:23:27.758       |   ^
00:23:27.758 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:23:27.758 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:23:27.758   156 |   uint8_t driver_priv_data[0];
00:23:27.758       |           ^
00:23:28.324 [253/264] Linking target lib/librte_cmdline.so.24.0 00:23:28.324 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:23:28.581 [255/264] Linking target lib/librte_security.so.24.0 00:23:31.105 [256/264] Linking target lib/librte_hash.so.24.0
00:23:31.105 In function '_mm256_storeu_si256',
00:23:31.105     inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:868:10,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:23:31.105 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:31.105   928 |   *__P = __A;
00:23:31.105       |   ^
00:23:31.105 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:23:31.105 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:23:31.105    91 |   uint8_t hash_key[0];
00:23:31.105       |           ^
00:23:31.105 In function '_mm_storeu_si128',
00:23:31.105     inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:868:10,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:23:31.105 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:31.105   727 |   *__P = __B;
00:23:31.105       |   ^
00:23:31.105 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:23:31.105 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:23:31.105    91 |   uint8_t hash_key[0];
00:23:31.105       |           ^
00:23:31.105 In function '_mm_storeu_si128',
00:23:31.105     inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:868:10,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:23:31.105 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:31.105   727 |   *__P = __B;
00:23:31.105       |   ^
00:23:31.105 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:23:31.105 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:23:31.105    91 |   uint8_t hash_key[0];
00:23:31.105       |           ^
00:23:31.105 In function '_mm256_storeu_si256',
00:23:31.105     inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:866:10,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:23:31.105 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:31.105   928 |   *__P = __A;
00:23:31.105       |   ^
00:23:31.105 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:23:31.105 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:23:31.105    91 |   uint8_t hash_key[0];
00:23:31.105       |           ^
00:23:31.105 In function '_mm_storeu_si128',
00:23:31.105     inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:866:10,
00:23:31.105     inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:23:31.105 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:23:31.105   727 |   *__P = __B;
00:23:31.105       |   ^
00:23:31.105 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:23:31.105 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:23:31.105    91 |   uint8_t hash_key[0];
00:23:31.105       |           ^
00:23:31.671 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:23:38.227 [258/264] Linking target lib/librte_ethdev.so.24.0 00:23:38.227 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:23:40.124 [260/264] Linking target lib/librte_power.so.24.0 00:23:48.234 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:23:48.234 [262/264] Linking static target lib/librte_vhost.a 00:23:49.165 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:24:45.402 [264/264] Linking target lib/librte_vhost.so.24.0
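The -Wstringop-overflow warnings in the DPDK build above are GCC 10's object-size tracking meeting GNU-style zero-length trailing arrays: 'driver_priv_data[0]' in cryptodev_pmd.h and 'hash_key[0]' in rte_thash.c both have a declared size of 0, so once rte_memcpy() is inlined down to _mm256_storeu_si256()/_mm_storeu_si128() stores, the compiler reports writes of 32 or 16 bytes into a "region of size 0", even though the enclosing objects are allocated with extra trailing storage at run time. The following is a minimal standalone sketch of the pattern, contrasted with the C99 flexible-array-member spelling, whose size GCC treats as unknown rather than zero and therefore does not usually flag; all struct and function names here are hypothetical, not DPDK's.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* GNU zero-length array: the member's declared size is 0, so an
     * inlined memcpy()/vector store into it can trip -Wstringop-overflow
     * under GCC 10, exactly as in the build log above. */
    struct sess_zla {
            size_t user_data_sz;
            uint8_t user_data[0];
    };

    /* C99 flexible array member: the size is unknown rather than zero,
     * so GCC's object-size tracking does not flag writes into it. */
    struct sess_fam {
            size_t user_data_sz;
            uint8_t user_data[];
    };

    static struct sess_fam *
    sess_alloc(size_t user_data_sz)
    {
            /* One allocation: fixed header plus trailing user-data storage. */
            struct sess_fam *s = malloc(sizeof(*s) + user_data_sz);

            if (s != NULL)
                    s->user_data_sz = user_data_sz;
            return s;
    }

    static int
    sess_set_user_data(struct sess_fam *s, const void *data, size_t size)
    {
            if (size > s->user_data_sz)
                    return -1;
            memcpy(s->user_data, data, size); /* the store rte_memcpy() vectorizes */
            return 0;
    }

    int
    main(void)
    {
            struct sess_fam *s = sess_alloc(32);

            if (s == NULL || sess_set_user_data(s, "opaque user data", 17) != 0)
                    return 1;
            printf("%s\n", (const char *)s->user_data);
            free(s);
            return 0;
    }

Because the allocations behind the flagged DPDK structures really do carry the trailing bytes, the warnings above are effectively false positives for these call sites, which is consistent with the build carrying on and the job ultimately finishing successfully.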
00:24:45.402 NOTICE: You are using Python 3.6 which is EOL. Starting with v0.62.0, Meson will require Python 3.7 or newer
00:24:45.402 CC lib/log/log.o 00:24:45.402 CC lib/ut/ut.o 00:24:45.402 CC lib/ut_mock/mock.o 00:24:45.402 CC lib/log/log_flags.o 00:24:45.402 CC lib/log/log_deprecated.o 00:24:45.402 LIB libspdk_ut_mock.a 00:24:45.402 LIB libspdk_ut.a 00:24:45.402 LIB libspdk_log.a 00:24:45.402 CXX lib/trace_parser/trace.o 00:24:45.402 CC lib/ioat/ioat.o 00:24:45.402 CC lib/util/base64.o 00:24:45.402 CC lib/dma/dma.o 00:24:45.402 CC lib/util/bit_array.o 00:24:45.402 CC lib/util/cpuset.o 00:24:45.402 CC lib/util/crc16.o 00:24:45.402 CC lib/util/crc32.o 00:24:45.402 CC lib/util/crc32c.o 00:24:45.402 CC lib/vfio_user/host/vfio_user_pci.o 00:24:45.402 CC lib/util/crc32_ieee.o 00:24:45.402 LIB libspdk_dma.a 00:24:45.402 CC lib/util/crc64.o 00:24:45.402 CC lib/vfio_user/host/vfio_user.o 00:24:45.402 CC lib/util/dif.o 00:24:45.402 CC lib/util/fd.o 00:24:45.402 LIB libspdk_ioat.a 00:24:45.402 CC lib/util/file.o 00:24:45.402 CC lib/util/hexlify.o 00:24:45.402 CC lib/util/iov.o 00:24:45.402 CC lib/util/math.o 00:24:45.402 CC lib/util/pipe.o 00:24:45.402 LIB libspdk_vfio_user.a 00:24:45.402 CC lib/util/strerror_tls.o 00:24:45.402 CC lib/util/string.o 00:24:45.402 CC lib/util/uuid.o 00:24:45.402 CC lib/util/fd_group.o 00:24:45.402 LIB libspdk_trace_parser.a 00:24:45.402 CC lib/util/xor.o 00:24:45.402 CC lib/util/zipf.o 00:24:45.402 LIB libspdk_util.a 00:24:45.402 CC lib/json/json_parse.o 00:24:45.402 CC lib/idxd/idxd.o 00:24:45.402 CC lib/rdma/common.o 00:24:45.402 CC lib/json/json_util.o 00:24:45.402 CC lib/conf/conf.o 00:24:45.402 CC lib/vmd/vmd.o 00:24:45.402 CC lib/env_dpdk/env.o 00:24:45.402 CC lib/json/json_write.o 00:24:45.402 CC lib/idxd/idxd_user.o 00:24:45.402 CC lib/rdma/rdma_verbs.o 00:24:45.402 LIB libspdk_conf.a 00:24:45.402 CC lib/env_dpdk/memory.o 00:24:45.402 CC lib/vmd/led.o 00:24:45.402 CC lib/env_dpdk/pci.o 00:24:45.402 CC lib/env_dpdk/init.o 00:24:45.402 CC lib/env_dpdk/threads.o 00:24:45.402 LIB libspdk_rdma.a 00:24:45.402 LIB libspdk_json.a 00:24:45.402 LIB libspdk_idxd.a 00:24:45.402 CC lib/env_dpdk/pci_ioat.o 00:24:45.402 CC lib/env_dpdk/pci_virtio.o 00:24:45.402 LIB libspdk_vmd.a 00:24:45.402 CC lib/env_dpdk/pci_vmd.o 00:24:45.402 CC lib/env_dpdk/pci_idxd.o 00:24:45.402 CC lib/env_dpdk/pci_event.o 00:24:45.402 CC lib/jsonrpc/jsonrpc_server.o 00:24:45.402 CC lib/env_dpdk/sigbus_handler.o 00:24:45.402 CC lib/env_dpdk/pci_dpdk.o 00:24:45.402 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:24:45.402 CC lib/env_dpdk/pci_dpdk_2207.o 00:24:45.402 CC lib/env_dpdk/pci_dpdk_2211.o 00:24:45.402 CC lib/jsonrpc/jsonrpc_client.o 00:24:45.402 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:24:45.402 LIB libspdk_jsonrpc.a 00:24:45.402 CC lib/rpc/rpc.o 00:24:45.402 LIB libspdk_env_dpdk.a 00:24:45.402 LIB libspdk_rpc.a 00:24:45.402 CC lib/notify/notify.o 00:24:45.402 CC lib/sock/sock.o 00:24:45.402 CC lib/notify/notify_rpc.o 00:24:45.402 CC lib/sock/sock_rpc.o 00:24:45.402 CC lib/trace/trace.o 00:24:45.402 CC lib/trace/trace_flags.o 00:24:45.402 CC lib/trace/trace_rpc.o 00:24:45.402 LIB libspdk_notify.a 00:24:45.402 LIB libspdk_trace.a 00:24:45.402 LIB libspdk_sock.a 00:24:45.402 CC lib/thread/thread.o 00:24:45.402 CC lib/thread/iobuf.o 00:24:45.402 CC lib/nvme/nvme_ctrlr_cmd.o 00:24:45.402 CC lib/nvme/nvme_ctrlr.o 00:24:45.402 CC lib/nvme/nvme_fabric.o 00:24:45.402 CC lib/nvme/nvme_ns_cmd.o 00:24:45.402 CC lib/nvme/nvme_ns.o 00:24:45.402 CC lib/nvme/nvme_pcie_common.o 00:24:45.402 CC lib/nvme/nvme_pcie.o 00:24:45.402 CC lib/nvme/nvme_qpair.o 00:24:45.402 CC
lib/nvme/nvme.o 00:24:45.402 LIB libspdk_thread.a 00:24:45.402 CC lib/accel/accel.o 00:24:45.402 CC lib/nvme/nvme_quirks.o 00:24:45.402 CC lib/nvme/nvme_transport.o 00:24:45.402 CC lib/nvme/nvme_discovery.o 00:24:45.402 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:24:45.402 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:24:45.402 CC lib/nvme/nvme_tcp.o 00:24:45.402 CC lib/nvme/nvme_opal.o 00:24:45.402 CC lib/blob/blobstore.o 00:24:45.402 CC lib/accel/accel_rpc.o 00:24:45.402 CC lib/nvme/nvme_io_msg.o 00:24:45.402 CC lib/accel/accel_sw.o 00:24:45.402 CC lib/nvme/nvme_poll_group.o 00:24:45.402 CC lib/nvme/nvme_zns.o 00:24:45.402 CC lib/nvme/nvme_cuse.o 00:24:45.402 CC lib/nvme/nvme_vfio_user.o 00:24:45.402 LIB libspdk_accel.a 00:24:45.403 CC lib/init/json_config.o 00:24:45.403 CC lib/virtio/virtio.o 00:24:45.403 CC lib/bdev/bdev.o 00:24:45.403 CC lib/init/subsystem.o 00:24:45.403 CC lib/virtio/virtio_vhost_user.o 00:24:45.403 CC lib/nvme/nvme_rdma.o 00:24:45.403 CC lib/init/subsystem_rpc.o 00:24:45.403 CC lib/init/rpc.o 00:24:45.403 CC lib/virtio/virtio_vfio_user.o 00:24:45.403 CC lib/virtio/virtio_pci.o 00:24:45.403 CC lib/blob/request.o 00:24:45.403 CC lib/bdev/bdev_rpc.o 00:24:45.403 CC lib/bdev/bdev_zone.o 00:24:45.403 LIB libspdk_init.a 00:24:45.403 CC lib/blob/zeroes.o 00:24:45.403 CC lib/bdev/part.o 00:24:45.403 CC lib/blob/blob_bs_dev.o 00:24:45.403 CC lib/bdev/scsi_nvme.o 00:24:45.403 LIB libspdk_virtio.a 00:24:45.403 LIB libspdk_blob.a 00:24:45.403 CC lib/event/app.o 00:24:45.403 CC lib/event/reactor.o 00:24:45.403 CC lib/event/log_rpc.o 00:24:45.403 CC lib/event/app_rpc.o 00:24:45.403 CC lib/event/scheduler_static.o 00:24:45.403 CC lib/blobfs/blobfs.o 00:24:45.403 CC lib/lvol/lvol.o 00:24:45.403 CC lib/blobfs/tree.o 00:24:45.403 LIB libspdk_event.a 00:24:45.403 LIB libspdk_nvme.a 00:24:45.403 LIB libspdk_bdev.a 00:24:45.403 LIB libspdk_blobfs.a 00:24:45.403 LIB libspdk_lvol.a 00:24:45.403 CC lib/scsi/dev.o 00:24:45.403 CC lib/nvmf/ctrlr.o 00:24:45.403 CC lib/nbd/nbd.o 00:24:45.403 CC lib/nbd/nbd_rpc.o 00:24:45.403 CC lib/scsi/lun.o 00:24:45.403 CC lib/nvmf/ctrlr_discovery.o 00:24:45.403 CC lib/nvmf/ctrlr_bdev.o 00:24:45.403 CC lib/scsi/port.o 00:24:45.403 CC lib/ftl/ftl_core.o 00:24:45.403 CC lib/nvmf/subsystem.o 00:24:45.403 CC lib/ftl/ftl_init.o 00:24:45.403 CC lib/scsi/scsi.o 00:24:45.403 CC lib/scsi/scsi_bdev.o 00:24:45.403 CC lib/ftl/ftl_layout.o 00:24:45.403 CC lib/nvmf/nvmf.o 00:24:45.403 LIB libspdk_nbd.a 00:24:45.403 CC lib/nvmf/nvmf_rpc.o 00:24:45.403 CC lib/ftl/ftl_debug.o 00:24:45.403 CC lib/scsi/scsi_pr.o 00:24:45.403 CC lib/scsi/scsi_rpc.o 00:24:45.403 CC lib/ftl/ftl_io.o 00:24:45.403 CC lib/ftl/ftl_sb.o 00:24:45.403 CC lib/ftl/ftl_l2p.o 00:24:45.403 CC lib/scsi/task.o 00:24:45.403 CC lib/nvmf/transport.o 00:24:45.403 CC lib/ftl/ftl_l2p_flat.o 00:24:45.403 CC lib/nvmf/tcp.o 00:24:45.403 CC lib/nvmf/rdma.o 00:24:45.403 CC lib/ftl/ftl_nv_cache.o 00:24:45.403 CC lib/ftl/ftl_band.o 00:24:45.403 LIB libspdk_scsi.a 00:24:45.403 CC lib/ftl/ftl_band_ops.o 00:24:45.403 CC lib/ftl/ftl_writer.o 00:24:45.403 CC lib/ftl/ftl_rq.o 00:24:45.403 CC lib/ftl/ftl_reloc.o 00:24:45.403 CC lib/vhost/vhost.o 00:24:45.403 CC lib/iscsi/conn.o 00:24:45.403 CC lib/vhost/vhost_rpc.o 00:24:45.403 CC lib/ftl/ftl_l2p_cache.o 00:24:45.403 CC lib/ftl/ftl_p2l.o 00:24:45.403 CC lib/iscsi/init_grp.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:45.403 CC lib/vhost/vhost_scsi.o 00:24:45.403 CC lib/iscsi/iscsi.o 00:24:45.403 CC 
lib/iscsi/md5.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:45.403 CC lib/vhost/vhost_blk.o 00:24:45.403 LIB libspdk_nvmf.a 00:24:45.403 CC lib/iscsi/param.o 00:24:45.403 CC lib/iscsi/portal_grp.o 00:24:45.403 CC lib/iscsi/tgt_node.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:45.403 CC lib/vhost/rte_vhost_user.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:45.403 CC lib/iscsi/iscsi_subsystem.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:45.403 CC lib/iscsi/iscsi_rpc.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:45.403 CC lib/iscsi/task.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:45.403 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:45.403 CC lib/ftl/utils/ftl_conf.o 00:24:45.403 CC lib/ftl/utils/ftl_md.o 00:24:45.403 CC lib/ftl/utils/ftl_mempool.o 00:24:45.403 CC lib/ftl/utils/ftl_bitmap.o 00:24:45.403 LIB libspdk_iscsi.a 00:24:45.403 CC lib/ftl/utils/ftl_property.o 00:24:45.403 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:45.403 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:45.403 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:45.403 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:45.403 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:45.403 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:45.403 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:45.403 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:45.403 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:45.403 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:45.403 CC lib/ftl/base/ftl_base_dev.o 00:24:45.403 CC lib/ftl/base/ftl_base_bdev.o 00:24:45.403 LIB libspdk_vhost.a 00:24:45.403 LIB libspdk_ftl.a 00:24:45.403 CC module/env_dpdk/env_dpdk_rpc.o 00:24:45.403 CC module/scheduler/gscheduler/gscheduler.o 00:24:45.403 CC module/scheduler/dynamic/scheduler_dynamic.o 00:24:45.403 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:24:45.403 CC module/accel/error/accel_error.o 00:24:45.403 CC module/sock/posix/posix.o 00:24:45.403 CC module/accel/iaa/accel_iaa.o 00:24:45.403 CC module/accel/dsa/accel_dsa.o 00:24:45.403 CC module/accel/ioat/accel_ioat.o 00:24:45.403 CC module/blob/bdev/blob_bdev.o 00:24:45.403 LIB libspdk_env_dpdk_rpc.a 00:24:45.403 CC module/accel/error/accel_error_rpc.o 00:24:45.403 LIB libspdk_scheduler_gscheduler.a 00:24:45.403 LIB libspdk_scheduler_dpdk_governor.a 00:24:45.403 CC module/accel/ioat/accel_ioat_rpc.o 00:24:45.403 LIB libspdk_scheduler_dynamic.a 00:24:45.403 CC module/accel/iaa/accel_iaa_rpc.o 00:24:45.403 CC module/accel/dsa/accel_dsa_rpc.o 00:24:45.403 LIB libspdk_blob_bdev.a 00:24:45.403 LIB libspdk_accel_error.a 00:24:45.403 LIB libspdk_accel_ioat.a 00:24:45.403 LIB libspdk_accel_iaa.a 00:24:45.403 LIB libspdk_accel_dsa.a 00:24:45.403 LIB libspdk_sock_posix.a 00:24:45.403 CC module/bdev/gpt/gpt.o 00:24:45.403 CC module/bdev/error/vbdev_error.o 00:24:45.403 CC module/bdev/malloc/bdev_malloc.o 00:24:45.403 CC module/bdev/lvol/vbdev_lvol.o 00:24:45.403 CC module/blobfs/bdev/blobfs_bdev.o 00:24:45.403 CC module/bdev/null/bdev_null.o 00:24:45.403 CC module/bdev/nvme/bdev_nvme.o 00:24:45.403 CC module/bdev/delay/vbdev_delay.o 00:24:45.403 CC module/bdev/passthru/vbdev_passthru.o 00:24:45.403 CC module/bdev/gpt/vbdev_gpt.o 00:24:45.403 CC module/bdev/raid/bdev_raid.o 00:24:45.403 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:24:45.403 CC module/bdev/error/vbdev_error_rpc.o 00:24:45.403 CC module/bdev/null/bdev_null_rpc.o 00:24:45.403 CC module/bdev/malloc/bdev_malloc_rpc.o 00:24:45.403 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:24:45.403 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:24:45.403 CC module/bdev/delay/vbdev_delay_rpc.o 00:24:45.403 LIB libspdk_blobfs_bdev.a 00:24:45.403 LIB libspdk_bdev_gpt.a 00:24:45.403 LIB libspdk_bdev_error.a 00:24:45.403 CC module/bdev/raid/bdev_raid_rpc.o 00:24:45.403 LIB libspdk_bdev_null.a 00:24:45.403 CC module/bdev/raid/bdev_raid_sb.o 00:24:45.403 LIB libspdk_bdev_malloc.a 00:24:45.403 LIB libspdk_bdev_passthru.a 00:24:45.403 LIB libspdk_bdev_delay.a 00:24:45.403 CC module/bdev/split/vbdev_split.o 00:24:45.403 CC module/bdev/raid/raid0.o 00:24:45.403 CC module/bdev/zone_block/vbdev_zone_block.o 00:24:45.403 CC module/bdev/aio/bdev_aio.o 00:24:45.403 LIB libspdk_bdev_lvol.a 00:24:45.403 CC module/bdev/raid/raid1.o 00:24:45.403 CC module/bdev/ftl/bdev_ftl.o 00:24:45.403 CC module/bdev/raid/concat.o 00:24:45.403 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:45.403 CC module/bdev/split/vbdev_split_rpc.o 00:24:45.403 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:45.403 CC module/bdev/daos/bdev_daos.o 00:24:45.403 CC module/bdev/daos/bdev_daos_rpc.o 00:24:45.403 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:45.403 LIB libspdk_bdev_raid.a 00:24:45.403 CC module/bdev/aio/bdev_aio_rpc.o 00:24:45.403 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:45.403 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:45.403 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:45.403 LIB libspdk_bdev_split.a 00:24:45.403 LIB libspdk_bdev_ftl.a 00:24:45.403 CC module/bdev/nvme/nvme_rpc.o 00:24:45.403 LIB libspdk_bdev_aio.a 00:24:45.403 CC module/bdev/nvme/bdev_mdns_client.o 00:24:45.403 LIB libspdk_bdev_zone_block.a 00:24:45.403 CC module/bdev/nvme/vbdev_opal.o 00:24:45.403 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:45.403 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:45.403 LIB libspdk_bdev_daos.a 00:24:45.403 LIB libspdk_bdev_virtio.a 00:24:45.403 LIB libspdk_bdev_nvme.a 00:24:45.403 CC module/event/subsystems/iobuf/iobuf.o 00:24:45.403 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:45.403 CC module/event/subsystems/vmd/vmd.o 00:24:45.403 CC module/event/subsystems/sock/sock.o 00:24:45.403 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:45.403 CC module/event/subsystems/scheduler/scheduler.o 00:24:45.403 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:45.403 LIB libspdk_event_sock.a 00:24:45.403 LIB libspdk_event_vhost_blk.a 00:24:45.403 LIB libspdk_event_scheduler.a 00:24:45.403 LIB libspdk_event_iobuf.a 00:24:45.403 LIB libspdk_event_vmd.a 00:24:45.403 CC module/event/subsystems/accel/accel.o 00:24:45.403 LIB libspdk_event_accel.a 00:24:45.403 CC module/event/subsystems/bdev/bdev.o 00:24:45.403 LIB libspdk_event_bdev.a 00:24:45.403 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:45.403 CC module/event/subsystems/nbd/nbd.o 00:24:45.403 CC module/event/subsystems/scsi/scsi.o 00:24:45.404 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:45.404 LIB libspdk_event_nbd.a 00:24:45.404 LIB libspdk_event_scsi.a 00:24:45.404 LIB libspdk_event_nvmf.a 00:24:45.404 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:24:45.404 CC module/event/subsystems/iscsi/iscsi.o 00:24:45.404 LIB libspdk_event_vhost_scsi.a 00:24:45.404 LIB libspdk_event_iscsi.a 00:24:45.404 TEST_HEADER include/spdk/config.h 00:24:45.404 CXX app/trace/trace.o 00:24:45.404 CXX test/cpp_headers/rpc.o 00:24:45.404 CC test/event/event_perf/event_perf.o 00:24:45.404 CC examples/accel/perf/accel_perf.o 00:24:45.404 CC test/bdev/bdevio/bdevio.o 00:24:45.404 CC test/accel/dif/dif.o 00:24:45.404 CC 
test/blobfs/mkfs/mkfs.o 00:24:45.404 CC test/dma/test_dma/test_dma.o 00:24:45.404 CC test/env/mem_callbacks/mem_callbacks.o 00:24:45.404 CXX test/cpp_headers/vfio_user_spec.o 00:24:45.404 CC test/app/bdev_svc/bdev_svc.o 00:24:45.663 LINK event_perf 00:24:45.663 LINK spdk_trace 00:24:45.663 LINK bdev_svc 00:24:45.663 LINK accel_perf 00:24:45.663 LINK bdevio 00:24:45.663 CXX test/cpp_headers/accel_module.o 00:24:45.663 LINK mkfs 00:24:45.663 LINK dif 00:24:45.663 LINK test_dma 00:24:45.921 CXX test/cpp_headers/bit_pool.o 00:24:45.921 LINK mem_callbacks 00:24:45.921 CXX test/cpp_headers/ioat.o 00:24:46.180 CXX test/cpp_headers/blobfs.o 00:24:46.180 CXX test/cpp_headers/pipe.o 00:24:46.180 CXX test/cpp_headers/accel.o 00:24:46.439 CXX test/cpp_headers/version.o 00:24:46.439 CXX test/cpp_headers/trace_parser.o 00:24:46.697 CXX test/cpp_headers/opal_spec.o 00:24:47.265 CXX test/cpp_headers/uuid.o 00:24:47.833 CXX test/cpp_headers/bdev.o 00:24:48.768 CXX test/cpp_headers/hexlify.o 00:24:49.026 CXX test/cpp_headers/likely.o 00:24:49.591 CXX test/cpp_headers/vhost.o 00:24:50.156 CXX test/cpp_headers/memory.o 00:24:50.721 CC app/trace_record/trace_record.o 00:24:50.979 CXX test/cpp_headers/vfio_user_pci.o 00:24:51.237 LINK spdk_trace_record 00:24:51.804 CXX test/cpp_headers/dma.o 00:24:52.740 CXX test/cpp_headers/nbd.o 00:24:53.305 CXX test/cpp_headers/env.o 00:24:54.239 CXX test/cpp_headers/nvme_zns.o 00:24:55.611 CXX test/cpp_headers/env_dpdk.o 00:24:57.024 CXX test/cpp_headers/init.o 00:24:58.400 CXX test/cpp_headers/fd_group.o 00:24:59.776 CXX test/cpp_headers/bdev_module.o 00:25:01.152 CXX test/cpp_headers/opal.o 00:25:02.529 CXX test/cpp_headers/event.o 00:25:03.464 CXX test/cpp_headers/base64.o 00:25:04.841 CXX test/cpp_headers/nvmf.o 00:25:05.099 CC test/env/vtophys/vtophys.o 00:25:06.037 CXX test/cpp_headers/nvmf_spec.o 00:25:06.037 LINK vtophys 00:25:06.972 CXX test/cpp_headers/blobfs_bdev.o 00:25:08.349 CXX test/cpp_headers/fd.o 00:25:09.285 CXX test/cpp_headers/barrier.o 00:25:10.222 CXX test/cpp_headers/nvmf_fc_spec.o 00:25:11.598 CXX test/cpp_headers/zipf.o 00:25:12.535 CC test/event/reactor/reactor.o 00:25:12.535 CXX test/cpp_headers/scheduler.o 00:25:13.471 LINK reactor 00:25:13.471 CXX test/cpp_headers/dif.o 00:25:14.423 CXX test/cpp_headers/scsi_spec.o 00:25:14.989 CC app/nvmf_tgt/nvmf_main.o 00:25:15.556 CXX test/cpp_headers/blob.o 00:25:15.814 LINK nvmf_tgt 00:25:16.749 CXX test/cpp_headers/cpuset.o 00:25:17.685 CXX test/cpp_headers/thread.o 00:25:18.253 CC examples/bdev/hello_world/hello_bdev.o 00:25:18.820 CXX test/cpp_headers/tree.o 00:25:19.390 CXX test/cpp_headers/xor.o 00:25:19.648 LINK hello_bdev 00:25:20.216 CXX test/cpp_headers/assert.o 00:25:21.593 CXX test/cpp_headers/file.o 00:25:22.529 CXX test/cpp_headers/endian.o 00:25:23.906 CXX test/cpp_headers/notify.o 00:25:24.844 CXX test/cpp_headers/util.o 00:25:26.221 CXX test/cpp_headers/log.o 00:25:27.178 CXX test/cpp_headers/sock.o 00:25:28.144 CXX test/cpp_headers/nvme_ocssd_spec.o 00:25:29.079 CXX test/cpp_headers/config.o 00:25:29.644 CXX test/cpp_headers/histogram_data.o 00:25:29.644 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:25:30.580 CXX test/cpp_headers/nvme_intel.o 00:25:30.580 LINK env_dpdk_post_init 00:25:31.515 CXX test/cpp_headers/idxd_spec.o 00:25:32.450 CXX test/cpp_headers/crc16.o 00:25:33.824 CXX test/cpp_headers/bdev_zone.o 00:25:35.199 CXX test/cpp_headers/stdinc.o 00:25:36.135 CXX test/cpp_headers/vmd.o 00:25:37.071 CXX test/cpp_headers/scsi.o 00:25:38.973 CXX test/cpp_headers/jsonrpc.o 
00:25:40.347 CXX test/cpp_headers/blob_bdev.o 00:25:41.723 CXX test/cpp_headers/crc32.o 00:25:42.689 CXX test/cpp_headers/nvmf_transport.o 00:25:44.588 CXX test/cpp_headers/idxd.o 00:25:45.964 CXX test/cpp_headers/crc64.o 00:25:46.899 CXX test/cpp_headers/nvme.o 00:25:48.274 CXX test/cpp_headers/iscsi_spec.o 00:25:48.532 CC test/event/reactor_perf/reactor_perf.o 00:25:49.468 CXX test/cpp_headers/queue.o 00:25:49.468 LINK reactor_perf 00:25:49.725 CXX test/cpp_headers/nvmf_cmd.o 00:25:50.658 CXX test/cpp_headers/lvol.o 00:25:51.591 CXX test/cpp_headers/ftl.o 00:25:52.528 CXX test/cpp_headers/trace.o 00:25:53.096 CXX test/cpp_headers/ioat_spec.o 00:25:53.664 CXX test/cpp_headers/conf.o 00:25:54.231 CXX test/cpp_headers/ublk.o 00:25:55.167 CXX test/cpp_headers/bit_array.o 00:25:55.734 CXX test/cpp_headers/pci_ids.o 00:25:56.299 CXX test/cpp_headers/nvme_spec.o 00:25:56.866 CC test/env/memory/memory_ut.o 00:25:57.126 CXX test/cpp_headers/string.o 00:25:57.692 CXX test/cpp_headers/gpt_spec.o 00:25:58.258 CXX test/cpp_headers/nvme_ocssd.o 00:25:58.848 LINK memory_ut 00:25:58.848 CC test/event/app_repeat/app_repeat.o 00:25:58.848 CXX test/cpp_headers/json.o 00:25:59.795 CXX test/cpp_headers/reduce.o 00:25:59.795 LINK app_repeat 00:26:00.053 CC test/event/scheduler/scheduler.o 00:26:00.053 CXX test/cpp_headers/mmio.o 00:26:00.622 LINK scheduler 00:26:01.189 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:26:01.448 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:26:02.384 CC test/lvol/esnap/esnap.o 00:26:02.643 LINK nvme_fuzz 00:26:02.643 CC examples/bdev/bdevperf/bdevperf.o 00:26:02.902 CC test/env/pci/pci_ut.o 00:26:03.470 LINK iscsi_fuzz 00:26:03.728 LINK bdevperf 00:26:03.987 LINK pci_ut 00:26:06.520 LINK esnap 00:26:16.498 CC app/iscsi_tgt/iscsi_tgt.o 00:26:16.498 LINK iscsi_tgt 00:26:26.529 CC test/app/histogram_perf/histogram_perf.o 00:26:26.529 LINK histogram_perf 00:26:27.464 CC test/nvme/aer/aer.o 00:26:28.840 LINK aer 00:26:34.110 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:26:34.368 CC app/spdk_tgt/spdk_tgt.o 00:26:34.368 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:26:35.742 LINK spdk_tgt 00:26:35.742 LINK vhost_fuzz 00:26:39.044 CC test/app/jsoncat/jsoncat.o 00:26:39.302 CC app/spdk_lspci/spdk_lspci.o 00:26:39.559 LINK jsoncat 00:26:40.125 LINK spdk_lspci 00:26:40.125 CC test/rpc_client/rpc_client_test.o 00:26:41.060 LINK rpc_client_test 00:26:46.325 CC test/app/stub/stub.o 00:26:47.702 LINK stub 00:27:02.577 CC test/nvme/reset/reset.o 00:27:03.952 LINK reset 00:27:03.952 CC test/nvme/sgl/sgl.o 00:27:04.885 LINK sgl 00:27:07.412 CC test/thread/poller_perf/poller_perf.o 00:27:07.669 LINK poller_perf 00:27:08.604 CC test/nvme/e2edp/nvme_dp.o 00:27:09.169 LINK nvme_dp 00:27:14.433 CC test/nvme/overhead/overhead.o 00:27:15.368 LINK overhead 00:27:17.272 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:27:18.648 LINK histogram_ut 00:27:21.930 CC examples/blob/hello_world/hello_blob.o 00:27:22.865 LINK hello_blob 00:27:25.396 CC test/unit/lib/accel/accel.c/accel_ut.o 00:27:26.772 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:27:28.673 LINK accel_ut 00:27:32.857 LINK bdev_ut 00:27:33.903 CC test/thread/lock/spdk_lock.o 00:27:34.858 CC examples/ioat/perf/perf.o 00:27:35.426 LINK ioat_perf 00:27:35.684 CC test/nvme/err_injection/err_injection.o 00:27:35.942 LINK spdk_lock 00:27:36.505 LINK err_injection 00:27:37.438 CC examples/nvme/hello_world/hello_world.o 00:27:38.373 LINK hello_world 00:27:38.940 CC examples/nvme/reconnect/reconnect.o 00:27:39.873 LINK reconnect 00:27:39.873 CC 
test/nvme/startup/startup.o 00:27:40.809 LINK startup 00:27:42.710 CC examples/sock/hello_world/hello_sock.o 00:27:44.085 LINK hello_sock 00:27:47.368 CC test/unit/lib/bdev/part.c/part_ut.o 00:27:55.476 LINK part_ut 00:27:58.007 CC examples/ioat/verify/verify.o 00:27:59.382 LINK verify 00:28:04.649 CC test/nvme/reserve/reserve.o 00:28:06.023 LINK reserve 00:28:06.589 CC app/spdk_nvme_perf/perf.o 00:28:08.490 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:28:08.490 LINK spdk_nvme_perf 00:28:09.068 CC test/nvme/simple_copy/simple_copy.o 00:28:09.068 LINK scsi_nvme_ut 00:28:09.326 LINK simple_copy 00:28:11.225 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:28:11.483 LINK gpt_ut 00:28:11.483 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:28:11.741 CC test/nvme/connect_stress/connect_stress.o 00:28:12.000 CC examples/nvme/nvme_manage/nvme_manage.o 00:28:12.567 LINK connect_stress 00:28:13.133 LINK nvme_manage 00:28:13.391 LINK vbdev_lvol_ut 00:28:16.000 CC examples/vmd/lsvmd/lsvmd.o 00:28:16.000 CC examples/nvmf/nvmf/nvmf.o 00:28:16.000 LINK lsvmd 00:28:16.568 LINK nvmf 00:28:16.827 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:28:22.093 LINK bdev_ut 00:28:22.093 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:28:25.377 CC examples/blob/cli/blobcli.o 00:28:25.943 LINK bdev_raid_ut 00:28:26.875 LINK blobcli 00:28:32.137 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:28:32.395 LINK bdev_raid_sb_ut 00:28:32.396 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:28:33.332 LINK concat_ut 00:28:34.709 CC app/spdk_nvme_identify/identify.o 00:28:35.645 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:28:35.645 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:28:35.645 LINK spdk_nvme_identify 00:28:36.212 LINK bdev_zone_ut 00:28:36.471 LINK raid1_ut 00:28:37.406 CC examples/util/zipf/zipf.o 00:28:37.665 LINK zipf 00:28:38.232 CC test/nvme/boot_partition/boot_partition.o 00:28:38.232 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:28:38.490 CC test/nvme/compliance/nvme_compliance.o 00:28:38.490 LINK boot_partition 00:28:38.749 CC examples/nvme/arbitration/arbitration.o 00:28:39.007 LINK vbdev_zone_block_ut 00:28:39.265 LINK nvme_compliance 00:28:39.831 LINK arbitration 00:28:42.358 CC examples/vmd/led/led.o 00:28:42.616 LINK led 00:28:44.517 CC test/nvme/fused_ordering/fused_ordering.o 00:28:45.891 LINK fused_ordering 00:28:52.474 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:29:00.603 LINK bdev_nvme_ut 00:29:05.872 CC examples/thread/thread/thread_ex.o 00:29:06.439 LINK thread 00:29:11.702 CC app/spdk_nvme_discover/discovery_aer.o 00:29:11.702 LINK spdk_nvme_discover 00:29:13.080 CC examples/nvme/hotplug/hotplug.o 00:29:13.339 CC test/nvme/doorbell_aers/doorbell_aers.o 00:29:13.598 LINK hotplug 00:29:13.598 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:29:13.598 CC test/nvme/fdp/fdp.o 00:29:13.857 LINK doorbell_aers 00:29:14.115 LINK blob_bdev_ut 00:29:14.374 LINK fdp 00:29:15.310 CC examples/idxd/perf/perf.o 00:29:16.245 LINK idxd_perf 00:29:18.148 CC test/unit/lib/blob/blob.c/blob_ut.o 00:29:21.431 CC examples/interrupt_tgt/interrupt_tgt.o 00:29:21.996 CC test/nvme/cuse/cuse.o 00:29:22.254 LINK interrupt_tgt 00:29:25.533 LINK cuse 00:29:26.100 CC examples/nvme/cmb_copy/cmb_copy.o 00:29:27.474 LINK cmb_copy 00:29:29.374 LINK blob_ut 00:29:32.652 CC examples/nvme/abort/abort.o 00:29:33.590 LINK abort 00:29:37.813 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:29:38.748 LINK pmr_persistence 00:29:41.281 CC 
test/unit/lib/blobfs/tree.c/tree_ut.o 00:29:41.848 LINK tree_ut 00:29:42.416 CC app/spdk_top/spdk_top.o 00:29:44.317 LINK spdk_top 00:29:44.575 CC test/unit/lib/dma/dma.c/dma_ut.o 00:29:44.575 CC test/unit/lib/event/app.c/app_ut.o 00:29:45.141 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:29:45.400 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:29:45.658 LINK dma_ut 00:29:45.916 LINK app_ut 00:29:47.290 LINK reactor_ut 00:29:47.857 LINK blobfs_async_ut 00:29:52.039 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:29:52.974 LINK ioat_ut 00:29:53.910 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:29:55.811 LINK conn_ut 00:29:57.713 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:29:58.279 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:29:58.537 LINK jsonrpc_server_ut 00:29:58.537 LINK json_parse_ut 00:29:58.795 CC app/vhost/vhost.o 00:29:59.053 LINK vhost 00:29:59.053 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:29:59.617 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:29:59.617 LINK blobfs_sync_ut 00:29:59.617 CC app/spdk_dd/spdk_dd.o 00:29:59.617 LINK init_grp_ut 00:29:59.875 CC app/fio/nvme/fio_plugin.o 00:29:59.875 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:29:59.875 LINK spdk_dd 00:30:00.133 LINK json_util_ut 00:30:00.391 CC app/fio/bdev/fio_plugin.o 00:30:00.391 LINK spdk_nvme 00:30:00.649 LINK spdk_bdev 00:30:00.649 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:30:00.649 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:30:00.649 LINK blobfs_bdev_ut 00:30:01.216 CC test/unit/lib/log/log.c/log_ut.o 00:30:01.474 LINK iscsi_ut 00:30:01.474 LINK log_ut 00:30:01.733 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:30:02.669 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:30:02.928 LINK lvol_ut 00:30:03.494 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:30:03.753 LINK json_write_ut 00:30:04.319 CC test/unit/lib/notify/notify.c/notify_ut.o 00:30:05.319 LINK notify_ut 00:30:05.578 LINK nvme_ut 00:30:08.864 CC test/unit/lib/iscsi/param.c/param_ut.o 00:30:10.243 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:30:10.243 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:30:10.243 LINK param_ut 00:30:12.774 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:30:13.341 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:30:14.282 LINK tcp_ut 00:30:14.540 LINK dev_ut 00:30:14.799 LINK nvme_ctrlr_ut 00:30:15.366 LINK nvme_ctrlr_cmd_ut 00:30:15.933 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:30:17.837 LINK portal_grp_ut 00:30:20.369 In function '_mm256_storeu_si256', 00:30:20.369 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:20.369 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:30:20.369 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:20.369 928 | *__P = __A; 00:30:20.369 | ^ 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:20.369 156 | uint8_t driver_priv_data[0]; 00:30:20.369 | ^ 00:30:20.369 In function '_mm_storeu_si128', 00:30:20.369 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 
00:30:20.369 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:30:20.369 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:20.369 727 | *__P = __B; 00:30:20.369 | ^ 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:20.369 156 | uint8_t driver_priv_data[0]; 00:30:20.369 | ^ 00:30:20.369 In function '_mm_storeu_si128', 00:30:20.369 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:20.369 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:30:20.369 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:20.369 727 | *__P = __B; 00:30:20.369 | ^ 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:20.369 156 | uint8_t driver_priv_data[0]; 00:30:20.369 | ^ 00:30:20.369 In function '_mm256_storeu_si256', 00:30:20.369 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:20.369 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:30:20.369 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:20.369 928 | *__P = __A; 00:30:20.369 | ^ 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:20.369 156 | uint8_t driver_priv_data[0]; 00:30:20.369 | ^ 00:30:20.369 In function '_mm_storeu_si128', 00:30:20.369 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:20.369 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:30:20.369 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:20.369 727 | *__P = __B; 00:30:20.369 | ^ 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:20.369 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:20.369 156 | uint8_t driver_priv_data[0]; 00:30:20.369 | ^ 00:30:20.628 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:30:22.005 CC test/unit/lib/sock/sock.c/sock_ut.o 00:30:22.263 LINK lun_ut 00:30:24.797 LINK sock_ut 00:30:25.365 CC test/unit/lib/sock/posix.c/posix_ut.o 00:30:26.302 CC 
test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:30:26.560 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:30:27.127 LINK posix_ut 00:30:28.072 LINK tgt_node_ut 00:30:28.639 LINK nvme_ctrlr_ocssd_cmd_ut 00:30:30.537 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:30:31.911 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:30:31.911 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:30:32.478 LINK scsi_ut 00:30:33.045 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:30:33.304 LINK ctrlr_ut 00:30:33.563 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:30:33.822 In function '_mm256_storeu_si256', 00:30:33.822 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:33.822 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:30:33.822 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:33.822 928 | *__P = __A; 00:30:33.822 | ^ 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:33.822 156 | uint8_t driver_priv_data[0]; 00:30:33.822 | ^ 00:30:33.822 In function '_mm_storeu_si128', 00:30:33.822 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:33.822 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:30:33.822 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:33.822 727 | *__P = __B; 00:30:33.822 | ^ 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:33.822 156 | uint8_t driver_priv_data[0]; 00:30:33.822 | ^ 00:30:33.822 In function '_mm_storeu_si128', 00:30:33.822 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:33.822 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:30:33.822 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:33.822 727 | *__P = __B; 00:30:33.822 | ^ 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:33.822 156 | uint8_t driver_priv_data[0]; 00:30:33.822 | ^ 00:30:33.822 In function '_mm256_storeu_si256', 00:30:33.822 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:33.822 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:30:33.822 
/opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:33.822 928 | *__P = __A; 00:30:33.822 | ^ 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:33.822 156 | uint8_t driver_priv_data[0]; 00:30:33.822 | ^ 00:30:33.822 In function '_mm_storeu_si128', 00:30:33.822 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:33.822 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:30:33.822 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:33.822 727 | *__P = __B; 00:30:33.822 | ^ 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:30:33.822 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:30:33.822 156 | uint8_t driver_priv_data[0]; 00:30:33.822 | ^ 00:30:34.080 LINK nvme_ns_ut 00:30:35.016 LINK scsi_bdev_ut 00:30:35.583 CC test/unit/lib/thread/thread.c/thread_ut.o 00:30:36.562 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:30:37.134 LINK nvme_ns_cmd_ut 00:30:38.069 LINK scsi_pr_ut 00:30:38.069 LINK thread_ut 00:30:38.637 In function '_mm256_storeu_si256', 00:30:38.637 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:38.637 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:30:38.637 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:38.896 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:38.896 928 | *__P = __A; 00:30:38.896 | ^ 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:38.896 91 | uint8_t hash_key[0]; 00:30:38.896 | ^ 00:30:38.896 In function '_mm_storeu_si128', 00:30:38.896 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:38.896 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:30:38.896 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:38.896 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:38.896 727 | *__P = __B; 00:30:38.896 | ^ 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:38.896 91 | uint8_t hash_key[0]; 00:30:38.896 | ^ 00:30:38.896 In function '_mm_storeu_si128', 00:30:38.896 inlined from 
'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:38.896 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:30:38.896 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:38.896 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:38.896 727 | *__P = __B; 00:30:38.896 | ^ 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:38.896 91 | uint8_t hash_key[0]; 00:30:38.896 | ^ 00:30:38.896 In function '_mm256_storeu_si256', 00:30:38.896 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:38.896 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:30:38.896 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:38.896 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:38.896 928 | *__P = __A; 00:30:38.896 | ^ 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:38.896 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:38.897 91 | uint8_t hash_key[0]; 00:30:38.897 | ^ 00:30:38.897 In function '_mm_storeu_si128', 00:30:38.897 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:38.897 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:30:38.897 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:38.897 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:38.897 727 | *__P = __B; 00:30:38.897 | ^ 00:30:38.897 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:38.897 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:38.897 91 | uint8_t hash_key[0]; 00:30:38.897 | ^ 00:30:39.830 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:30:43.113 LINK nvme_ns_ocssd_cmd_ut 00:30:43.113 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:30:45.015 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:30:45.015 LINK nvme_pcie_ut 00:30:45.015 CC test/unit/lib/util/base64.c/base64_ut.o 00:30:45.015 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:30:45.015 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:30:45.273 LINK base64_ut 00:30:45.532 LINK iobuf_ut 00:30:45.792 LINK nvme_poll_group_ut 00:30:45.792 In function '_mm256_storeu_si256', 00:30:45.792 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 
00:30:45.792 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:45.792 928 | *__P = __A; 00:30:45.792 | ^ 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:45.792 91 | uint8_t hash_key[0]; 00:30:45.792 | ^ 00:30:45.792 In function '_mm_storeu_si128', 00:30:45.792 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:45.792 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:45.792 727 | *__P = __B; 00:30:45.792 | ^ 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:45.792 91 | uint8_t hash_key[0]; 00:30:45.792 | ^ 00:30:45.792 In function '_mm_storeu_si128', 00:30:45.792 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:45.792 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:45.792 727 | *__P = __B; 00:30:45.792 | ^ 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:45.792 91 | uint8_t hash_key[0]; 00:30:45.792 | ^ 00:30:45.792 In function '_mm256_storeu_si256', 00:30:45.792 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:45.792 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:45.792 928 | *__P = __A; 00:30:45.792 | ^ 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:45.792 91 | uint8_t hash_key[0]; 00:30:45.792 | ^ 00:30:45.792 In function '_mm_storeu_si128', 00:30:45.792 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:30:45.792 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:30:45.792 inlined from 'rte_thash_init_ctx' at 
../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:30:45.792 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:30:45.792 727 | *__P = __B; 00:30:45.792 | ^ 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:30:45.792 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:30:45.792 91 | uint8_t hash_key[0]; 00:30:45.792 | ^ 00:30:46.729 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:30:46.729 LINK subsystem_ut 00:30:47.296 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:30:47.296 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:30:47.864 LINK ctrlr_discovery_ut 00:30:47.864 LINK ctrlr_bdev_ut 00:30:47.864 LINK bit_array_ut 00:30:48.122 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:30:48.122 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:30:48.689 LINK cpuset_ut 00:30:49.255 LINK nvme_qpair_ut 00:30:49.513 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:30:50.447 LINK nvme_quirks_ut 00:30:50.447 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:30:51.014 LINK crc16_ut 00:30:51.014 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:30:51.014 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:30:51.273 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:30:51.531 LINK pci_event_ut 00:30:51.531 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:30:51.789 LINK crc32_ieee_ut 00:30:51.789 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:30:52.047 LINK nvmf_ut 00:30:52.047 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:30:52.982 LINK rdma_ut 00:30:52.982 LINK crc32c_ut 00:30:53.241 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:30:53.500 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:30:53.500 LINK subsystem_ut 00:30:53.759 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:30:53.759 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:30:53.759 LINK nvme_tcp_ut 00:30:54.018 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:30:54.018 LINK idxd_user_ut 00:30:54.018 LINK rpc_ut 00:30:54.582 LINK vhost_ut 00:30:54.840 LINK crc64_ut 00:30:55.404 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:30:55.661 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:30:55.919 CC test/unit/lib/util/dif.c/dif_ut.o 00:30:55.919 CC test/unit/lib/util/iov.c/iov_ut.o 00:30:56.178 LINK iov_ut 00:30:56.436 LINK idxd_ut 00:30:56.436 CC test/unit/lib/rdma/common.c/common_ut.o 00:30:56.694 LINK transport_ut 00:30:56.953 LINK common_ut 00:30:56.953 LINK dif_ut 00:30:57.212 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:30:57.780 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:30:58.040 CC test/unit/lib/util/math.c/math_ut.o 00:30:58.040 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:30:58.299 LINK pipe_ut 00:30:58.299 LINK math_ut 00:30:58.299 LINK nvme_transport_ut 00:30:58.558 LINK ftl_l2p_ut 00:30:58.558 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:30:58.558 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:30:59.493 LINK ftl_io_ut 00:30:59.493 LINK ftl_band_ut 00:30:59.753 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:30:59.753 CC test/unit/lib/util/string.c/string_ut.o 00:30:59.753 CC test/unit/lib/util/xor.c/xor_ut.o 00:30:59.753 LINK ftl_bitmap_ut 00:30:59.753 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:30:59.753 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:31:00.012 LINK string_ut 00:31:00.012 LINK ftl_mempool_ut 00:31:00.012 LINK xor_ut 
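Each *_ut object compiled and linked through this stretch of the log (crc16_ut, string_ut, xor_ut, and the rest) is one of SPDK's standalone unit-test binaries, written against the CUnit framework. Each binary boils down to roughly the shape sketched below; this is a minimal illustrative suite, buildable with something like "gcc sketch.c -lcunit", and the buf_xor helper and its assertions are hypothetical stand-ins rather than the contents of xor_ut or any other suite above.

    #include <CUnit/Basic.h>
    #include <string.h>

    /* Hypothetical function under test, standing in for whatever a real
     * suite such as xor_ut or crc16_ut exercises. */
    static void
    buf_xor(unsigned char *dst, const unsigned char *a,
            const unsigned char *b, size_t n)
    {
            size_t i;

            for (i = 0; i < n; i++)
                    dst[i] = a[i] ^ b[i];
    }

    static void
    test_buf_xor(void)
    {
            unsigned char a[4] = { 0xff, 0x00, 0xaa, 0x55 };
            unsigned char b[4] = { 0x0f, 0xf0, 0xaa, 0x55 };
            unsigned char out[4];

            buf_xor(out, a, b, sizeof(out));
            CU_ASSERT(out[0] == 0xf0);
            CU_ASSERT(out[1] == 0xf0);
            CU_ASSERT(out[2] == 0x00);
            CU_ASSERT(out[3] == 0x00);
    }

    int
    main(void)
    {
            CU_pSuite suite;
            unsigned int num_failures;

            if (CU_initialize_registry() != CUE_SUCCESS)
                    return CU_get_error();

            suite = CU_add_suite("xor_sketch", NULL, NULL);
            if (suite == NULL ||
                CU_add_test(suite, "buf_xor", test_buf_xor) == NULL) {
                    CU_cleanup_registry();
                    return CU_get_error();
            }

            CU_basic_set_mode(CU_BRM_VERBOSE);
            CU_basic_run_tests();
            num_failures = CU_get_number_of_failures();
            CU_cleanup_registry();
            return num_failures;
    }

Returning the failure count from main() is what lets a test harness like this autotest run treat any non-zero exit status as a failed suite.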
00:31:00.580 LINK nvme_io_msg_ut 00:31:00.580 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:31:00.580 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:31:00.839 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:31:00.839 LINK ftl_mngt_ut 00:31:00.839 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:31:00.839 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:31:01.098 LINK ftl_sb_ut 00:31:01.098 LINK nvme_opal_ut 00:31:01.098 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:31:01.357 LINK nvme_fabric_ut 00:31:01.357 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:31:01.357 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:31:01.357 LINK nvme_pcie_common_ut 00:31:01.616 LINK ftl_layout_upgrade_ut 00:31:01.876 LINK nvme_cuse_ut 00:31:02.134 LINK nvme_rdma_ut 00:31:07.454 05:08:21 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:31:07.454 make[1]: Nothing to be done for 'clean'. 00:31:10.741 05:08:24 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:31:10.741 05:08:24 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:31:10.741 05:08:24 -- common/autotest_common.sh@10 -- $ set +x 00:31:10.741 05:08:24 -- spdk/autopackage.sh@48 -- $ timing_finish 00:31:10.741 05:08:24 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:10.741 05:08:24 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:10.741 05:08:24 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:10.741 + [[ -n 2721 ]] 00:31:10.741 + sudo kill 2721 00:31:10.741 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:31:10.750 [Pipeline] } 00:31:10.768 [Pipeline] // timeout 00:31:10.772 [Pipeline] } 00:31:10.788 [Pipeline] // stage 00:31:10.799 [Pipeline] } 00:31:10.816 [Pipeline] // catchError 00:31:10.824 [Pipeline] stage 00:31:10.826 [Pipeline] { (Stop VM) 00:31:10.841 [Pipeline] sh 00:31:11.120 + vagrant halt 00:31:13.655 ==> default: Halting domain... 00:31:20.235 [Pipeline] sh 00:31:20.510 + vagrant destroy -f 00:31:23.045 ==> default: Removing domain... 00:31:23.316 [Pipeline] sh 00:31:23.598 + mv output /var/jenkins/workspace/centos7-vg-autotest/output 00:31:23.607 [Pipeline] } 00:31:23.629 [Pipeline] // stage 00:31:23.635 [Pipeline] } 00:31:23.657 [Pipeline] // dir 00:31:23.663 [Pipeline] } 00:31:23.683 [Pipeline] // wrap 00:31:23.691 [Pipeline] } 00:31:23.708 [Pipeline] // catchError 00:31:23.719 [Pipeline] stage 00:31:23.721 [Pipeline] { (Epilogue) 00:31:23.737 [Pipeline] sh 00:31:24.020 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:38.919 [Pipeline] catchError 00:31:38.922 [Pipeline] { 00:31:38.941 [Pipeline] sh 00:31:39.225 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:39.225 Artifacts sizes are good 00:31:39.233 [Pipeline] } 00:31:39.247 [Pipeline] // catchError 00:31:39.260 [Pipeline] archiveArtifacts 00:31:39.266 Archiving artifacts 00:31:39.608 [Pipeline] cleanWs 00:31:39.621 [WS-CLEANUP] Deleting project workspace... 00:31:39.621 [WS-CLEANUP] Deferred wipeout is used... 00:31:39.649 [WS-CLEANUP] done 00:31:39.651 [Pipeline] } 00:31:39.670 [Pipeline] // stage 00:31:39.675 [Pipeline] } 00:31:39.692 [Pipeline] // node 00:31:39.698 [Pipeline] End of Pipeline 00:31:39.730 Finished: SUCCESS